Strapi deployment to Heroku

I am totally new to Strapi and Heroku. I am trying to deploy my app, which works well locally, to Heroku, but I am getting the following error:
2020-06-15T09:56:29.114780+00:00 app[web.1]: [2020-06-15T09:56:29.114Z] error Impossible to register the 'menus.menus' model.
2020-06-15T09:56:29.115672+00:00 app[web.1]: [2020-06-15T09:56:29.115Z] error TimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
At the beginning I thought it was a problem connecting to the database, but in my local environment it works perfectly and connects with no issues.
I even upgraded my database to a paid plan in case the connection was timing out.
I also followed some answers I found online about modifying my config/environment/production/database.json as follows:
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "bookshelf",
      "settings": {
        "client": "postgres",
        "host": "***.compute-1.amazonaws.com",
        "port": "5432",
        "database": "***",
        "username": "***",
        "password": "***",
        "ssl": { "rejectUnauthorized": false }
      },
      "options": {
        "debug": false,
        "acquireConnectionTimeout": 100000,
        "pool": {
          "min": 0,
          "max": 10,
          "createTimeoutMillis": 30000,
          "acquireTimeoutMillis": 600000,
          "idleTimeoutMillis": 20000,
          "reapIntervalMillis": 20000,
          "createRetryIntervalMillis": 200
        }
      }
    }
  }
}
Any other idea of what it could be?
When I run develop locally I get a warning (but even so, the app runs anyway afterwards):
[2020-06-15T10:36:41.261Z] warn The bootstrap function is taking unusually long to execute (3500 miliseconds).
[2020-06-15T10:36:41.261Z] warn Make sure you call it?
[2020-06-15T10:36:42.476Z] warn The bootstrap function is taking unusually long to execute (3500 miliseconds).
[2020-06-15T10:36:42.476Z] warn Make sure you call it?

One simple first step is to launch a Strapi quickstart application on Heroku. You can find the link here: https://github.com/strapi/strapi
Relaunching with this method will provide you with a working, secure instance to begin development on.
Also note that Heroku runs Strapi in production mode, so you are not able to use the content-type builder; it is recommended that you develop and test your app locally and use the Heroku CLI to update your deployment.
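For that last step, the standard Heroku Git workflow is usually all that is needed; a minimal sketch, assuming the Heroku CLI is installed and the app already exists (the app name below is a placeholder):

# link the local project to the existing Heroku app (app name is hypothetical)
heroku git:remote -a my-strapi-app
# commit the changes made and tested locally
git add .
git commit -m "Update content types"
# deploy: Heroku rebuilds and restarts the app
git push heroku master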

Related

How to proxy API requests? (Angular-CLI)

I'm working on a Java project with Spring 4 and Angular 5. The session is generated on the Spring side, but I'm not able to get that session from my Angular service. It works in Postman and I'm able to get a response there, but it does not work with the Angular POST call.
So I thought it may be a proxy issue (correct me if I'm wrong).
My local URL is http://localhost:8080/MacromWeb/ws/login.
How can I create a proxy.conf.json file?
For that I have added this line to my package.json file:
"start": "ng serve --proxy-config proxy.conf.json",
I have created a new file called proxy.conf.json and put this code in it:
{
  "/": {
    "target": "http://localhost:8080/MacromWeb/ws",
    "secure": false
  }
}
Then I tried both ng serve and npm start.
You can achieve this through the proxy; you need to provide proper values in the proxy config.
/* should work too, but if MacromWeb is common to your API URLs, then instead of / provide /MacromWeb/*.
proxy.conf.json should look something like this:
{
  "/MacromWeb/*": {
    "target": {
      "host": "localhost",
      "protocol": "http:",
      "port": 8080
    },
    "secure": false,
    "changeOrigin": true,
    "logLevel": "debug"
  }
}
Hope it helps.
Say we have a server running on http://localhost:3000 and we want all calls to http://localhost:4200/api to go to that server.
In our proxy.conf.json file, we add the following content:
{
  "/api": {
    "target": "http://localhost:3000",
    "secure": false,
    "pathRewrite": {
      "^/api": ""
    }
  }
}
More on this: here
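As a usage sketch, this is roughly how an Angular service would call the proxied path; the service and endpoint names are made up, and the only part that matters is the relative /api prefix, which the dev-server proxy above rewrites to the backend URL:

// login.service.ts - hypothetical service; calling a relative URL is what lets
// the dev-server proxy (proxy.conf.json above) forward the request to the backend.
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class LoginService {
  constructor(private http: HttpClient) {}

  login(username: string, password: string): Observable<unknown> {
    // '/api/login' is rewritten to 'http://localhost:3000/login' by the proxy;
    // withCredentials keeps the session cookie set by the server.
    return this.http.post('/api/login', { username, password }, { withCredentials: true });
  }
}

Note that the proxy only applies to ng serve; in production the app must be served behind a reverse proxy or call absolute URLs with CORS configured.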

new composer-wallet - jszip error

I am making a new composer-wallet with composer 0.19.0.
All tests passed fine (tests based on composer-wallet-filesystem).
I can successfully import business network cards to the new wallet and use them for transactions.
I have only one issue:
$ composer card list
Error: Can't find end of central directory : is this a zip file ? If it is, see http://stuk.github.io/jszip/documentation/howto/read_zip.html
Command failed
I tried updating jszip to the latest version in composer-cli, but the problem remains.
Here is the environment variable that configures the connection:
export NODE_CONFIG='{
  "composer": {
    "wallet": {
      "type": "composer-wallet-mongodb",
      "desc": "Uses a local mongodb instance",
      "options": {
        "uri": "mongodb://localhost:27017/yourCollection",
        "collectionName": "myWallet",
        "options": {
        }
      }
    }
  }
}'
Any help is welcome.

Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager

My cluster config file is as follows:
{
  "name": "SampleCluster",
  "clusterConfigurationVersion": "1.0.0",
  "apiVersion": "01-2017",
  "nodes": [
    {
      "nodeName": "vm0",
      "iPAddress": "here is my VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
    },
    {
      "nodeName": "vm1",
      "iPAddress": "here is my another VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r1",
      "upgradeDomain": "UD1"
    },
    {
      "nodeName": "vm2",
      "iPAddress": "here is my another VPS ip",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r2",
      "upgradeDomain": "UD2"
    }
  ],
  "properties": {
    "reliabilityLevel": "Bronze",
    "diagnosticsStore": {
      "metadata": "Please replace the diagnostics file share with an actual file share accessible from all cluster machines.",
      "dataDeletionAgeInDays": "7",
      "storeType": "FileShare",
      "IsEncrypted": "false",
      "connectionstring": "c:\\ProgramData\\SF\\DiagnosticsStore"
    },
    "nodeTypes": [
      {
        "name": "NodeType0",
        "clientConnectionEndpointPort": "19000",
        "clusterConnectionEndpointPort": "19001",
        "leaseDriverEndpointPort": "19002",
        "serviceConnectionEndpointPort": "19003",
        "httpGatewayEndpointPort": "19080",
        "reverseProxyEndpointPort": "19081",
        "applicationPorts": {
          "startPort": "20001",
          "endPort": "20031"
        },
        "isPrimary": true
      }
    ],
    "fabricSettings": [
      {
        "name": "Setup",
        "parameters": [
          {
            "name": "FabricDataRoot",
            "value": "C:\\ProgramData\\SF"
          },
          {
            "name": "FabricLogRoot",
            "value": "C:\\ProgramData\\SF\\Log"
          }
        ]
      }
    ]
  }
}
It is almost identical to the demo file for an unsecured cluster that ships with the standalone Service Fabric download, except for my VPS IPs. I enabled the Remote Registry service. I ran
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json
but I got the following error:
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <IP Address> resulted in exception: Unable to change open service manager handle because 5.
Unable to change open service manager handle because 5
Unable to query service configuration because System.InvalidOperationException: Unable to change open service manager handle because 5
   at System.Fabric.FabricDeployer.FabricDeployerServiceController.GetServiceStartupType(String machineName, String serviceName)
Querying remote registry service on machine <Another IP Address> resulted in exception: Unable to change open service manager handle because 5.
Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces
LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : False
FirewallAvailable :
RpcCheckPassed :
NoConflictingInstallations :
FabricInstallable :
DataDrivesAvailable :
Passed : False
Test Config failed with exception: System.InvalidOperationException: Best Practices Analyzer determined environment has an issue. Please see additional BPA log output in DeploymentTraces folder.
   at System.Management.Automation.MshCommandRuntime.ThrowTerminatingError(ErrorRecord errorRecord)
I don't understand the problem. The VPSs are not locally connected; all have public IPs. I don't know whether that may be the issue. How do I make a virtual LAN among these VPSs? Can anyone give me some direction about this error? Any help is greatly appreciated.
Edit: I used the term VM instead of VPS.
Finally I got this working. Actually all the nodes are in a network; I thought they weren't. I enabled file sharing. I accessed the shared files from the node where I ran the configuration test to all the other nodes, and I had to provide the login credentials. Then it worked like a charm.
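For anyone hitting the same RemoteRegistryAvailable : False result, a few checks run from the machine that executes TestConfiguration.ps1 can confirm those same prerequisites; this is only a sketch, and the node address and account below are placeholders:

# placeholder node address; repeat for every node in the cluster config
$node = "10.0.0.5"
# file sharing (SMB, port 445) must be reachable from the deployment machine
Test-NetConnection -ComputerName $node -Port 445
# the Remote Registry service must be running on the target node
Get-Service -Name RemoteRegistry -ComputerName $node
# caching admin credentials for the admin share lets the deployer copy files (prompts for the password)
net use "\\$node\C`$" /user:Administrator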

Chronos can't run a private Docker container

I'm playing on localhost with a DC/OS installation. While everything works fine, I can't seem to run a Docker image located in a private repo. I'm using Python to communicate with Chronos:
@celery.task(name='add-job', soft_time_limit=5)
def add_job(job_id):
    job_document = mongo.jobs.find_one({
        '_id': job_id
    })
    if job_document:
        worker_document = mongo.workers.find_one({
            '_id': job_document['workerId']
        })
        if worker_document:
            job = {
                'async': True,
                'name': job_document['_id'],
                'owner': 'owner@gmail.com',
                'command': "python /code/run.py",
                "disabled": False,
                "shell": True,
                "cpus": worker_document['cpus'],
                "disk": worker_document['disk'],
                "mem": worker_document['memory'],
                'schedule': 'R1//PT300S',  # start now
                "epsilon": "PT60M",
                # run the command inside a Docker container from the private registry
                "container": {
                    "type": "DOCKER",
                    "forcePullImage": True,
                    "image": "quay.io/username/container",
                    "network": "HOST",
                    "volumes": [{
                        "containerPath": "/images/",
                        "hostPath": "/images/",
                        "mode": "RW"
                    }]
                },
                # archive with the Docker credentials, fetched into the sandbox
                "uris": [
                    "file:///images/docker.tar.gz"
                ]
            }
            return chronos_client.add(job)
        else:
            return 'worker not found'
    else:
        return 'job not found'
The job runs fine with a public image (alpine:latest), but the private one fails without any error inside the DC/OS installation.
The job gets executed but it fails immediately. The error log of the job inside Chronos looks like this:
I1212 12:39:11.141639 25058 fetcher.cpp:498] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/61d6d037-c9f5-482b-a441-11d85554461b-S1\/root","items":[{"action":"BYPASS_CACHE","uri":{"cache":false,"executable":false,"extract":false,"value":"file:\/\/\/images\/docker.tar.gz"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/61d6d037-c9f5-482b-a441-11d85554461b-S1\/docker\/links\/7029bbea-4c3d-439a-8720-411f6fe40eb9","user":"root"}
I1212 12:39:11.143575 25058 fetcher.cpp:409] Fetching URI 'file:///images/docker.tar.gz'
I1212 12:39:11.143587 25058 fetcher.cpp:250] Fetching directly into the sandbox directory
I1212 12:39:11.143602 25058 fetcher.cpp:187] Fetching URI 'file:///images/docker.tar.gz'
I1212 12:39:11.143612 25058 fetcher.cpp:167] Copying resource with command:cp '/images/docker.tar.gz' '/var/lib/mesos/slave/slaves/61d6d037-c9f5-482b-a441-11d85554461b-S1/docker/links/7029bbea-4c3d-439a-8720-411f6fe40eb9/docker.tar.gz'
I1212 12:39:11.146726 25058 fetcher.cpp:547] Fetched 'file:///images/docker.tar.gz' to '/var/lib/mesos/slave/slaves/61d6d037-c9f5-482b-a441-11d85554461b-S1/docker/links/7029bbea-4c3d-439a-8720-411f6fe40eb9/docker.tar.gz'
Stdout is empty. Executed directly inside Marathon as an application with the same settings, the authentication works and my image is downloaded and executed. Is this something that Chronos does not support? It should... I mean, it has options for Docker.
Update: digging deeper into the agent logs I found this:
Failed to run 'docker -H unix:///var/run/docker.sock pull quay.io/username/container': exited with status 1; stderr='Error: Status 403 trying to pull repository username/container: "{\"error\": \"Permission Denied\"}"
I tried the archive with its config.json file on the agent itself, and the image can be downloaded when triggered from the command line. I just can't seem to understand why Chronos is not using it properly. I can't find any other reference on how to supply my credentials other than this.
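For context, the docker.tar.gz referenced by the job is typically just the Docker client configuration packed from the home directory of the user running the agent; a rough sketch of how such an archive is commonly built, with the registry and paths assumed to match the job above:

# log in once so ~/.docker/config.json holds the quay.io credentials
docker login quay.io
# pack the .docker directory from the home directory
cd ~
tar -czf docker.tar.gz .docker
# place the archive at the path the job fetches (file:///images/docker.tar.gz)
cp docker.tar.gz /images/docker.tar.gz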
As it turns out, the uris param is deprecated in favor of fetch. I started from scratch with a Marathon config applied to Chronos and watched the logs carefully, and I saw this: {'message': 'Tried to add both uri (deprecated) and fetch parameters on aBPepwhG5z33e4teG', 'status': 'Bad Request'}. Then I changed my uris parameter into:
"fetch": [{
"uri": "/images/docker.tar.gz",
"extract": true,
"executable": false,
"cache": false
}]
...and it worked.
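Applied to the Python job definition from the question, the change is just a key swap; a small sketch that reuses the values above and assumes the same job dict and chronos_client:

# replace the deprecated "uris" entry with the newer "fetch" form
job.pop("uris", None)
job["fetch"] = [{
    "uri": "/images/docker.tar.gz",
    "extract": True,
    "executable": False,
    "cache": False,
}]
chronos_client.add(job)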
Your post looked a little like this one, which turned out to be a problem with volumes.

Play framework app heroku configuration error: Key 'application.conf' may not be followed by token: 'application.prod.conf'

I built an app based on the template play-silhouette-seed-slick (template link).
I got a configuration error caused by com.typesafe.config.ConfigException$Parse after deploying the app to Heroku:
"Configuration error: Configuration error[ # file:/app/target/universal/stage/conf/: 2: Key 'application.conf' may not be followed by token: 'application.prod.conf' (if you intended 'application.prod.conf' to be part of a key or string value, try enclosing the key or value in double quotes)]"
The Procfile
web: target/universal/stage/bin/panobike-plus-server -Dhttp.port=${PORT} -Dconfig.resource=${PLAY_CONF_FILE}
And app.json
{
  "name": "play-silhouette-slick-seed",
  "description": "Seed project to show how Silhouette can be implemented into a Play Framework application with database access using Slick 3.",
  "keywords": [
    "Play",
    "Silhouette",
    "Slick"
  ],
  "website": "https://github.com/sbrunk/play-silhouette-slick-seed",
  "repository": "https://github.com/sbrunk/play-silhouette-slick-seed",
  "success_url": "/",
  "env": {
    "BUILDPACK_URL": "https://github.com/heroku/heroku-buildpack-scala.git",
    "PLAY_CONF_FILE": "application.prod.conf",
    "PLAY_APP_SECRET": "changeme",
    "FACEBOOK_CLIENT_ID": "",
    "FACEBOOK_CLIENT_SECRET": "",
    "GOOGLE_CLIENT_ID": "",
    "GOOGLE_CLIENT_SECRET": "",
    "TWITTER_CONSUMER_KEY": "",
    "TWITTER_CONSUMER_SECRET": ""
  }
}
In my production config "application.prod.conf", there is no such key "application.conf".
What does this error message mean?
Thank you
It was a stupid question.
I did not call the https://api.heroku.com/app-setups endpoint to set up the app.json-enabled application on Heroku.
I had the same error. It was due to the lack of the PLAY_CONF_FILE env variable. To fix this error you need to open the Heroku web page -> Settings -> click on the Config Vars button and set a new PLAY_CONF_FILE variable, for example application.staging.conf.
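Alternatively, the same config var can be set from the command line; a quick sketch with the Heroku CLI, where the app name is a placeholder:

# set the variable the Procfile expects, then let the dyno restart
heroku config:set PLAY_CONF_FILE=application.prod.conf -a my-play-app
# verify it is picked up
heroku config:get PLAY_CONF_FILE -a my-play-app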
