parse server 502 bad gateway after 1st request - parse-platform

I've done some research (googled and searched Stack Overflow) on this issue.
My local Parse Server works perfectly, but when I run it with PM2 on my hosting, I get a 502 Bad Gateway after the first successful request to Parse.
How can I resolve this issue? Thank you.

Very simple fix, I think, because it's a common fault with PM2 and parse-server. When you print the PM2 logs you will see that a restart is taking place because PM2's watch feature picks up writes to the parse-server logs.
To avoid that, you can disable this behavior entirely (watch: false) or just ignore the logs and hidden PM2 directories via ignore_watch.
e.g. in your PM2 process file:
"watch": true,
"ignore_watch": ["node_modules", "logs"]
Then restart parse-server and you should be good to go.
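Applied from the command line, that looks roughly like this (a sketch, assuming the process file is called process.json and the app is registered in PM2 as parse-server):
# Reload the app from the updated process file so the new watch options apply
pm2 delete parse-server
pm2 start process.json
# Tail the logs to confirm there are no more unexpected restarts
pm2 logs parse-server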
If it's not that, then set VERBOSE=1, print the logs, and post them here.

Related

How to troubleshoot DDEV DB container healthcheck timeout

When I want to start my DDEV project, a container gets stuck at creating:
Container ddev-oszimt-lf12a-v2-db Started
Error message:
Failed waiting for web/db containers to become ready: db container failed: log=, err=health check timed out after 2m0s: labels map[com.ddev.site-name:oszimt-lf12a-v2 com.docker.compose.service:db] timed out without becoming healthy, status=
It's an error I have also had with some other projects.
There is no information about this in the error log.
What could the problem be, and how do I fix it?
This isn't a very good place to debug problems with specific projects; our Discord channel or the DDEV issue queue is much better.
But I'll try to give you some ideas about how to study and debug this.
Go to the Troubleshooting section of the docs. Work through it step-by-step.
As it says there, try the simplest possible project and see what the results are.
If the problem is particular to one project, see if you can remove customizations like .ddev/docker-compose.*.yaml files, config.*.yaml files, and non-standard settings in config.yaml.
To find out what causes the healthcheck timeout, see the docs on this exact problem; in your case the db container is timing out. So first, run ddev logs -s db to see if something happened, and second, run docker inspect --format "{{json .State.Health }}" ddev-<projectname>-db.
For more help, you'll need to provide more information, such as your OS, Docker provider, etc. The easiest way to do that is to run ddev debug test, capture the output, put it in a gist on gist.github.com, and then come over to Discord with a link to it.
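Put together, the steps above look roughly like this (a sketch; <projectname> is a placeholder for your actual project name):
# Check the db container's own logs for clues
ddev logs -s db
# Inspect the Docker healthcheck state of the db container
docker inspect --format "{{json .State.Health }}" ddev-<projectname>-db
# Gather environment details to share on Discord or in the issue queue
ddev debug test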

How to stop Supabase running on localhost:3000?

I wanted to know what Supabase is, so I installed it using the local development guide.
That was a few weeks back. I was recently checking port 3000 and Supabase is still running. I have removed all Supabase-related folders, but it is still running. Can someone help me understand why it's still running and how to stop it?
To stop a running Supabase instance, you can use the CLI stop command:
supabase stop
Deleting folders without stopping will keep the current instance running.
You can always use htop (Linux) or follow this guide (macOS) to stop the process.
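If the CLI is no longer available, a minimal sketch for checking what is actually listening on port 3000 (assuming Linux or macOS; <container-id> is a placeholder):
# Show the process or Docker proxy bound to port 3000
lsof -i :3000
# If it turns out to be a Supabase Docker container, stop it directly
docker ps
docker stop <container-id>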
Hi guys, I found the solution. It was the browser cache that was being loaded. I deleted the history, cache, and cookies. Now if I open my browser and go to localhost:3000, I see a 'This site can't be reached' message.
Thanks @Mansueli and @ahmad

SonarQube API server/index doesn't show correct status

I'm trying to use the SonarQube API to check the status of the running instance for health check purposes. The API call I'm using for that is /api/server/index. It shows the version and the status.
However, when I stop the database that the instance is connected to, the server still shows status UP. I feel like it should show DOWN instead.
Has anybody else had experience with this?
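For reference, the check I'm running is roughly this (assuming a local instance on the default port 9000):
# Query the server index endpoint; the response includes the version and status
curl http://localhost:9000/api/server/index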
Thanks for reporting this issue. I've created a JIRA ticket to fix it: https://jira.sonarsource.com/browse/SONAR-7001

Error while registering public key in OpenShift

I'm getting an error while adding my public key, and it shows this error:
Error updating settings on gear: 01779aa6c3e04c71be82fbaa10662fcf with status: -1 and output: 
Any idea why this shows up every time I register my public key?
We believe this is a problem that arose in our most recent update on Tuesday and are now investigating. When you add an SSH key we copy it to each of your applications (so git will work), and it looks like the copy process started failing.
EDIT: We fixed an issue in production that was affecting a small number of users that resulted in this symptom. Please let us know if the issue is not fixed, and we'll investigate further.
I am getting the same error. Looks like it's an internal server problem on their end.
EDIT: It seems you can't put security on applications that are available for test in OpenShift, which makes sense. Remove the test applications that you got from OpenShift.
I got it solved. That number {01779aa6c3e04c71be82fbaa10662fcf} identifies an application you currently have in your domain. I removed all the applications in there. Make backups first, then clean your domain, add your public key again, and I am sure it will work.
Please do this with care and back up your applications first. To remove an application:
rhc app destroy -a {appName} -d
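Once the domain is cleaned up, re-adding the key from the rhc client looks roughly like this (a sketch; the key name and path are placeholders):
# Upload a public key to your OpenShift account
rhc sshkey add mykey ~/.ssh/id_rsa.pub
# List the keys registered on the account to confirm it was added
rhc sshkey list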
It was my silly mistake; I just had to add my private key when authenticating in PuTTY.

Getting Heroku logs for past few weeks

I'm trying to get the production logs for the past few weeks off of Heroku, but when I run heroku logs, it just returns a few lines showing the production log for today.
Is there any way to get Heroku logs for the past few weeks?
Thanks.
Any number of lines up to 500 can now be retrieved using the -n flag:
heroku logs -n 420
You can also run:
heroku logs -t
And let that run for a while.
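To keep a local copy, you can redirect or tee the output (a sketch; production.log is just an example filename):
# Grab the most recent lines and save them locally
heroku logs -n 500 > production.log
# Or stream new log lines as they arrive and append them to a file
heroku logs -t | tee -a production.log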
EDIT: You can also use third-party tools like Papertrail.
(Correcting my own old response.) Previously, Heroku only provided access to the last 100 lines. Now this limit appears to have been raised.
There's also the pretty cool-sounding Logentries add-on, with generous free offerings.
Not sure about going back in time given their limitations, but going forward you can forward your logs to an external syslog server.
Syslog drains (premium add-on)
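Setting up a drain looks roughly like this (a sketch; the syslog host and app name are placeholders):
# Forward all future logs from the app to an external syslog endpoint
heroku drains:add syslog://logs.example.com:514 -a my-app
# List the drains configured for the app
heroku drains -a my-app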
