Where are Laravel syslog logs on Windows? - laravel

I'm running a local Laravel app with Laragon, and I have this line of code:
Log::channel("syslog")->error('User ' . $request->email . ' logged in from IP ' . $ip);
After searching a little, I found that these logs go to /var/log/syslog.
But I'm on Windows with Laragon, so where does this log go?
I'm asking because I need to have logs in syslog format, so I'm thinking that I need to use syslog.
Is there any way to have logs in syslog format but save them in laravel.log?
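One possible approach (a minimal sketch, not taken from the question): define a custom channel in config/logging.php that writes to the usual laravel.log but applies a syslog-style Monolog formatter. The channel name syslog_file is made up here, and the sketch assumes Monolog 3.1+, which ships Monolog\Formatter\SyslogFormatter (RFC 5424 output); on older Monolog versions a custom formatter class would be needed instead.
// config/logging.php — hypothetical channel: syslog-formatted lines in laravel.log
'syslog_file' => [
    'driver' => 'single',
    'path' => storage_path('logs/laravel.log'),
    'formatter' => \Monolog\Formatter\SyslogFormatter::class,
],
The call site would then target that channel:
Log::channel('syslog_file')->error('User ' . $request->email . ' logged in from IP ' . $ip);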

Related

Laravel SSH put files to remote server

I am trying to upload a file from my Laravel project to a remote server.
$fileName = base_path().'/Storage/app/public/uploads/'.$file;
SSH::into('production')->put($fileName, './');
The result is a blank screen with no errors or anything, and the file is not on the remote server. I know my SSH config (keys, username, host) is correct, because this works fine:
SSH::into('production')->run('ls', function ($line) {
    echo $line.PHP_EOL;
});
What am I missing? What can I do to see verbose logging of the SSH call?
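A hedged sketch of the usual checks here, assuming the laravelcollective/ssh package that provides SSH::into: Laravel's storage directory is lowercase (storage_path() resolves it correctly, which matters on case-sensitive remote filesystems), and put() takes an explicit remote destination path; /home/deploy is a hypothetical remote directory.
// storage_path() avoids the hard-coded '/Storage/...' casing in the question
$fileName = storage_path('app/public/uploads/'.$file);
if (! file_exists($fileName)) {
    throw new \RuntimeException("Local file not found: {$fileName}");
}
// Give put() a full remote destination instead of './'
SSH::into('production')->put($fileName, '/home/deploy/'.$file);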

Laravel Octane not responding to requests

Trying to run Laravel Octane. I did it as it is written in the documentation (I use Sail). A normal application without Octane works well, but after I set all the settings, rebuild the containers, and run them, the application stops responding. All logs are empty. The command sudo lsof -i -P | grep LISTEN gives output like this:
What could be the problem and how to find it?

How to find the IP address of a local server

This may be something very simple and obvious, but I'm learning the command line. I'm trying to identify the IP address I need to load in a web browser to access my local server. For example, after entering these simple steps in the command line to create a folder with an index file:
mkdir www
nano index.html
Then running the server:
sudo python -m SimpleHTTPServer &
Displays this message: [1] 41749
What IP address do I need to load in a web browser to see the test index.html file? I’ve tried:
http://127.0.0.1/
http://localhost/
Also, entering hostname -i returns this message:
hostname: illegal option -- i
usage: hostname [-fs] [name-of-host]
Can anyone explain what's going on here? Probably something very obvious. It also feels like other commands aren't working as usual, since the ip addr command now returns -bash: ip: command not found.
Thanks for any help here.
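A hedged note, not from the question: the BSD-style hostname usage message and the missing ip command both suggest macOS. SimpleHTTPServer serves on port 8000 by default, so the port has to be part of the URL, and interface addresses can be listed the BSD way (the en0 interface name below is an assumption):
python -m SimpleHTTPServer 8000 &
# the module's default port is 8000, so browse to http://127.0.0.1:8000/
ipconfig getifaddr en0
# macOS command that prints one interface's IPv4 address; `ifconfig` lists them all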

Log file is changed automatically - PostgreSQL

I am starting a PostgreSQL 11 server from the command line on Windows and I am trying to pass the log file parameter; however, when I start the server, the log file is changed to the default one assigned in postgresql.conf by the log_directory and log_filename settings.
I tried deleting the log_directory and log_filename entries from the postgresql.conf file, but it didn't work: the log file is still changed to the default one given by the old log_directory and log_filename values.
I am stopping the server every time so the new settings are picked up, and I am starting it with this command line:
"C:\Program Files\PostgreSQL11\bin\pg_ctl.exe" -D "C:\Program Files\PostgreSQL11\data\pg11" -w -o "-F -p 5423" -l "C:\Program Files\PostgreSQL11\data\logs\pg11\MY_LOG_FILE.log" start
I get this log message in my log file and after that the log messages will be saved in the old default log file:
2019-07-30 11:18:00 CEST [19996]: [4-1] user=,db=,app=,client= TIPP:
The further log output will appear in the directory
»C:/PROGRA~1/POSTGR~2/data/logs/pg11«
It is mentioned in the documentation:
pg_ctl encapsulates tasks such as redirecting log output and properly
detaching from the terminal and process group.
However, since nobody has any idea about this issue, it looks like there is a difference between the log file passed to the executable and the log file from postgresql.conf: the one passed to the executable only captures output from the executable while it is starting the server, while the one from the config file captures output from inside the server, such as when you execute a query. So the result I got makes sense, and what I saw is actually the normal behavior; but in that case the documentation should be fixed.
If this is not the case, and pg_ctl really should redirect the server log output, then this is a bug in PostgreSQL 11.4, just for you to know.
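For reference, a hedged sketch of the split described above: pg_ctl's -l file only receives pg_ctl's own startup output, while the running server's messages go wherever the logging collector in postgresql.conf points. The directory and filename values below are illustrative, not taken from the question:
# postgresql.conf — where the running server writes its own log
logging_collector = on
log_directory = 'C:/Program Files/PostgreSQL11/data/logs/pg11'
log_filename = 'MY_LOG_FILE-%Y-%m-%d.log'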

Cron job not executing the shell script completely

This is Srikanth from Hyderabad.
I am a Linux administrator at a corporate company. We have a Squid server, so I prepared a backup Squid server, so that when the live Squid server goes down I can put the backup server into production.
My Squid servers run CentOS 5.5. I have prepared a script that backs up all configuration files in /etc/squid/ from the live server to the backup server, i.e. it copies all files from the live server's /etc/squid/ to the backup server's /etc/squid/.
Here's the script, saved as squidbackup.sh in the directory /opt/ with permissions 755 (rwxr-xr-x):
#!/bin/sh
# Copy the Squid config from the live server; the placeholders below must be filled in.
username="<username>"
password="<password>"
host="Server IP"
# The shell expands $username/$password/$host; the \"-escaped quotes reach expect intact.
expect -c "
spawn /usr/bin/scp -r $username@$host:/etc/squid /etc/
expect {
    \"*password:*\" {
        send \"$password\r\"
        interact
    }
    eof {
        exit
    }
}
"
Kindly note that this will be executed on the backup server, which will check for the user mentioned in the script. I have created a user on the live server and put the same one in the script.
When I execute the script with the command below:
[root@localhost ~]# sh /opt/squidbackup.sh
everything works fine: the script downloads all the files from /etc/squid/ on the live server to /etc/squid/ on the backup server.
The problem arises when I set this in crontab as below (or with other timings):
50 23 * * * sh /opt/squidbackup.sh
I don't know what's wrong, but it does not download all the files: the cron job downloads only a few files from /etc/squid/ on the live server to /etc/squid/ on the backup server.
Only a few files are downloaded when cron executes the script; if I run the script manually, it downloads all files perfectly, without any errors or warnings.
If you have any more questions, please go ahead and post them. Any solutions would be appreciated. Thank you in advance.
Thanks for your interest. I have tried what you said; it shows the output below, but previously I used to get the same output mailed to the user on the Squid backup server.
Even the cron logs show the same, but I was not able to work out the exact error from the lines below.
Please note that only a few files get downloaded with cron.
spawn /usr/bin/scp -r <username>@ServerIP:/etc/squid /etc/
<username>@ServerIP's password:
Kindly check if you can suggest anything else.
Try the simple options first. Capture stdout and stderr as shown below; these files should point to the problem.
Also, looking at the script, you need to specify the location of expect. That could be an issue.
50 23 * * * sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1
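A hedged illustration of the two points above: cron runs with a minimal environment, so either export a fuller PATH in the crontab, as below, or call expect by absolute path inside the script (the /usr/local/bin entry is an assumption; which expect shows the real location on the box):
# crontab — give cron an explicit PATH, then capture both output streams
PATH=/usr/local/bin:/usr/bin:/bin
50 23 * * * sh /opt/squidbackup.sh >/tmp/cronout.log 2>&1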
