Our application has been live and running for the past three to four months, and we haven't deployed any new fixes or changes to production. Unfortunately, we noticed that the application has stopped running.
The following is the error we observed in our logs:
"Can't create/write to file '/var/tmp/#sql_2f6_0.MYI" .
It would be greatly appreciated if any of you could help.
Check the services and the user for which MySQL is giving you this error. It is quite possible that one of the services is down, or that the user you are connecting to the database with is not being authenticated.
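A quick sketch of those checks, assuming a systemd-based Linux host (the service name and DB user below are placeholders):

# Is the MySQL service actually up?
sudo systemctl status mysql

# Can the application's DB user still authenticate? (replace app_user with your user)
mysql -u app_user -p -e "SELECT 1;"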
You, or the user that runs your MySQL service, doesn't have permission to write to /var/tmp/. You can fix this with chmod or the security permissions dialog, depending on which platform you're on.
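A minimal sketch of that fix on Linux, assuming MySQL runs as the mysql user and its tmpdir points at /var/tmp (check with SHOW VARIABLES LIKE 'tmpdir';):

# Check current ownership and permissions on the temp directory
ls -ld /var/tmp

# /var/tmp is normally world-writable with the sticky bit set (mode 1777)
sudo chmod 1777 /var/tmp

# Confirm the mysql user can actually write there
sudo -u mysql touch /var/tmp/write_test && sudo -u mysql rm /var/tmp/write_test

It's also worth confirming that the filesystem holding /var/tmp isn't full, since a full disk produces the same error when MySQL tries to create its temporary table files.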
I try to open my knowledge base, and for the past hour I have been receiving a message that indicates this:
I checked my Azure Search service and restarted the web app; it's running fine.
I supposed it was because I had reached the limit of my plan, so I upgraded my plan from the free tier (F1) to one with better limits (B1). That did nothing.
Could you help me understand, or help me find where I'm going wrong?
I finally found out what my error was. I had published, by mistake, the bot's files into the same directory as the QnA Maker knowledge base...
I could see this with Cloud Explorer or the App Service explorer tab in the Azure portal.
I've spent 3 days beating my head against this before coming here in desperation.
So, long story short: I thought I'd fire up a simple PHP site to give the moderators of a gaming group I'm in the ability to start GCP servers on demand. I'm no developer, so I'm looking at this from a systems perspective to find the simplest solution that does the job.
I fired up an Ubuntu 18.04 machine on GCP, set it up with the Google Cloud SDK, authorised it for access to the project, and was able to simply run gcloud commands, which worked fine. I had some issues with the PHP file calling the shell script to run the same commands, but after some testing I can see it's now calling the shell script without a problem (it broadcasts wall "test" to the console every time I click the button on the PHP page).
However, what does not happen is the execution of the gcloud command. If I run the shell script manually it starts up the instance without a problem and broadcasts the wall message; if I click the button it broadcasts, but that's it. I've given the files execute permissions and even given sudo rights to the user nginx runs as, and putting sudo sh in front of the command in the PHP file also made no difference. Please find the bash script below:
#!/bin/bash
/usr/lib/google-cloud-sdk/bin/gcloud compute instances start arma3s1-prod --zone=australia-southeast1-b
wall "test"
Any help would be greatly appreciated; this, coupled with an automated shutdown, would allow our gaming group to save money by only running the servers people want to play on.
If you want any more detail about the underlying system, please let me know.
So I asked a PHP dev at work about this, and in two seconds flat she pointed out the issue; now I feel stupid. In /etc/passwd the www-data user had /usr/sbin/nologin as its shell. After I fixed that and ran the script, gcloud wanted permission to write a log file to /var/www. I fixed those permissions too and it works fine. I'm not terribly worried about the page or even the server being hacked and destroyed; I can recreate them pretty easily.
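For anyone hitting the same thing, this is roughly the check and the fix (assuming, as here, that nginx runs as www-data and its home directory is /var/www):

# Show the login shell for the web server user (last field of the line)
grep '^www-data:' /etc/passwd

# Replace /usr/sbin/nologin with a shell that can actually run the script
sudo usermod -s /bin/bash www-data

# gcloud keeps its config and logs under the user's home directory (here /var/www),
# so that directory has to be writable by the web server user
sudo chown -R www-data:www-data /var/www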
Thanks for the help though! Sometimes I think I just need to take a step back and get a fresh set of eyes on the problem.
When you run a command while logged in, you have your own account's access rights to the Google Cloud API, but the PHP account doesn't have those.
Even if you give the www-data user root rights, that won't fix the problem; it may create some security issues, but nothing more.
If you really want to do this, you should create a service account that only has rights over the Compute Engine instances inside your project, and hand its JSON key to the GOOGLE_APPLICATION_CREDENTIALS environment variable; this way your PHP should have enough rights to do what you are asking of it.
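A rough sketch of that setup with gcloud (the project ID, account name, and key path are placeholders, and the role shown is broader than the strict minimum needed to start instances):

# Create a dedicated service account
gcloud iam service-accounts create arma-starter --display-name="Starts game servers from the PHP page"

# Grant it Compute Engine instance rights only
gcloud projects add-iam-policy-binding my-project-id \
    --member="serviceAccount:arma-starter@my-project-id.iam.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1"

# Download a JSON key and expose it to the PHP environment
gcloud iam service-accounts keys create /etc/gcloud/arma-starter.json \
    --iam-account="arma-starter@my-project-id.iam.gserviceaccount.com"
export GOOGLE_APPLICATION_CREDENTIALS=/etc/gcloud/arma-starter.json

# Note: client libraries pick up GOOGLE_APPLICATION_CREDENTIALS automatically; for the
# gcloud CLI itself you would instead activate the key:
gcloud auth activate-service-account --key-file=/etc/gcloud/arma-starter.json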
Note that the issue with this method is that, if you are hacked, there is a chance the instance hosting your PHP could be deleted too.
You could also try making a call to a prepared Cloud Function which will create the instance; this way, even if your instance is deleted, the Cloud Function would still be there.
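For example, if the Cloud Function is exposed over an HTTP trigger, the PHP page only needs to make a web request to it; the function name, region, project, and payload below are made up for illustration:

# Call a hypothetical HTTP-triggered function that starts the game server
curl -X POST "https://us-central1-my-project-id.cloudfunctions.net/start-arma-server" \
    -H "Content-Type: application/json" \
    -d '{"instance": "arma3s1-prod", "zone": "australia-southeast1-b"}'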
For some reason, I just can't get an Amazon Aurora DB launched. I haven't launched one before, but I have read many Amazon help and instruction pages. Launching other Amazon products worked well after some digging; this one just doesn't. Any suggestions?
Error:
Access denied to Performance Insights (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: 8ef6c7b9-be54-4bd8-aa87-XXXXXXXX)
http://prntscr.com/iug951
Today it works. I selected the same settings as yesterday; all I did differently was omit dashes (-) from the database name and the other things you have to name. If that was the actual cause of yesterday's three-hour headache, Amazon really sucks for showing a cryptic error message instead of just telling you that.
I just had the same issue with the same error message; restarting the setup process from the start (with a database name that had no dash in it) fixed the issue.
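For reference, the equivalent launch from the AWS CLI looks roughly like the sketch below; every name and credential is a placeholder, and the point is that the initial database name sticks to letters and numbers (the cluster identifier itself may contain dashes):

# Create an Aurora MySQL cluster with a dash-free initial database name
aws rds create-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora-mysql \
    --database-name mydatabase \
    --master-username admin \
    --master-user-password 'REPLACE_ME'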
I've inherited a Laravel 5.3 application that does not appear to be logging web processes or anything else on the server side in my development environment. Here are the things I've tried/confirmed:
Set APP_DEBUG = true
storage/logs exists and all users have read/write/execute permissions
I’ve created an empty laravel.log file, thinking it needs to exist before it can be written to. I’ve also run the app without that file.
FWIW, this app is running in a Vagrant instance and has the debug bar installed.
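For completeness, here is roughly what those checks look like from inside the box; the project path is a placeholder, and the last two checks are extra things worth verifying rather than anything already ruled out:

# Confirm the web server user can write to the log directory
cd /var/www/project
sudo -u www-data touch storage/logs/laravel.log && echo "writable"

# Laravel 5.3 reads the log driver from config/app.php ('log' => 'single', 'daily', 'syslog' or 'errorlog')
grep -n "'log'" config/app.php

# Trigger a test entry from the console to separate framework logging from web server logging:
#   php artisan tinker
#   >>> Log::error('logging smoke test');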
Any thoughts on what is going on here or something I can try to get logging started?
Thanks.
I found it hiding in the vagrant instance here: /var/log/nginx
With that solved, I'd still be grateful for any insight or resources as to how or why it's configured that way. Knowing this, searching within the project, and combing through the Vagrantfile still doesn't shed light on why logs are being saved there rather than to storage/logs.
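In case it helps anyone tracing the same thing, this is roughly how to hunt down where the errors are actually going; the paths assume a stock nginx + PHP-FPM box, so adjust as needed:

# Where does nginx send its error log?
grep -R "error_log" /etc/nginx/nginx.conf /etc/nginx/conf.d/ /etc/nginx/sites-enabled/ 2>/dev/null

# Where does PHP-FPM send errors from the application?
grep -R "error_log" /etc/php/*/fpm/ 2>/dev/null

# Watch the log that turned out to hold the entries
tail -f /var/log/nginx/error.log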
I have a CodeIgniter setup that has been running fine for the past two months, and recently I keep getting:
CodeIgniter error- unable to connect to database using the provided settings
I've recently added a new domain that has a landing page for the database login (zPanel), but I don't see how that could have caused a problem. Maybe the page keeps getting directory attacked or something, but I'm not sure.
Is there a way to check whether this is the problem through the logs? I'm at a dead end with this problem, as when I restart the server (DigitalOcean) it works fine again.
Really not sure. If anyone else has had a similar problem, I'd love to hear your solution.
Thanks.
I think your MySQL server is going down, so CodeIgniter can't connect with your database settings.
Please log in over SSH and check the processes with the top command. See what is using resources (RAM or CPU).
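Roughly, those checks look like this (log paths and service names assume a typical Ubuntu droplet):

top                                        # watch for processes eating RAM or CPU
free -m                                    # see whether the box is running out of memory
sudo systemctl status mysql                # confirm whether MySQL is actually up
sudo tail -n 50 /var/log/mysql/error.log   # MySQL usually logs why it crashed or was killed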
Also check your MySQL configuration settings and make sure everything is filled in; if values are left empty, it can cause a lot of problems.
For example:
http://www.maxwhale.com/how-to-optimize-mysql-for-1gb-memory-vps/
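In the spirit of the kind of tuning that article covers, a drop-in config for a small box might look like the sketch below; the values are illustrative assumptions for roughly 1 GB of RAM, not numbers taken from the article, and the include path assumes a Debian/Ubuntu MySQL 5.x layout:

# Write a small tuning override and restart MySQL to pick it up
sudo tee /etc/mysql/conf.d/tuning.cnf > /dev/null <<'EOF'
[mysqld]
key_buffer_size         = 32M
innodb_buffer_pool_size = 256M
max_connections         = 75
tmp_table_size          = 32M
max_heap_table_size     = 32M
query_cache_size        = 16M
EOF
sudo systemctl restart mysql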