Laravel Scheduler creating thousands of log files in tmp folder

I have a Digital Ocean Ubuntu 16.04 server running Laravel.
I have a couple of crons running every minute in the scheduler that trigger the QuickBooks API library. Every time one runs, it writes a pair of text files (a request file and a response file) similar to the one below:
RESPONSE URI FOR SEQUENCE ID 04745
==================================
https://quickbooks.api.intuit.com/v3/company/123456/query?minorversion=54
RESPONSE HEADERS
================
date: Tue, 15 Dec 2020 18:15:51 GMT
content-type: application/xml;charset=UTF-8
content-length: 1193
connection: close
server: nginx
strict-transport-security: max-age=15552000
intuit_tid: 1-5fd8fd57-123456
x-spanid: d59bb673-e981-4d61-9bdf-123456
x-amzn-trace-id: Root=1-5fd8fd57-123456
set-cookie: JSESSIONID=123456.c21-pprdc21uw2apv019661-stack-b; Domain=qbo.intuit.com; Path=/; Secure; Ht$
qbo-version: 1949.239
service-time: total=8, db=3
expires: 0
cache-control: max-age=0, no-cache, no-store, must-revalidate, private
vary: Accept-Encoding
x-xss-protection: 1; mode=block
RESPONSE BODY
=============
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<IntuitResponse xmlns="http://schema.intuit.com/finance/v3" time="2020-12-15T10:15:51.199-08:00">
<QueryResponse startPosition="1" maxResults="1">
<Vendor domain="QBO" sparse="false">
<Id>6213</Id>.....
I don't need these logs. How can I stop them from being created? I am not sure whether they are being created by Ubuntu, Laravel, or the QuickBooks API library.
As a band-aid I have a shell script, run from cron, that removes these files, but I am trying to stop them from being generated in the first place. Thanks

It looks like a response logged by the QuickBooks API library. You can check here or here to find out how to disable or modify the response log.
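If the library in use is the official QuickBooks Online V3 PHP SDK (an assumption, since the question doesn't name it), the request/response files come from the SDK's built-in logger, and the DataService object exposes methods to disable that logging or redirect it. A minimal sketch under that assumption, with placeholder credentials:
require __DIR__ . '/vendor/autoload.php';

use QuickBooksOnline\API\DataService\DataService;

// Placeholder credentials; substitute your own app/realm values.
$dataService = DataService::Configure(array(
    'auth_mode'    => 'oauth2',
    'ClientID'     => 'YOUR_CLIENT_ID',
    'ClientSecret' => 'YOUR_CLIENT_SECRET',
    'RefreshToken' => 'YOUR_REFRESH_TOKEN',
    'QBORealmID'   => 'YOUR_REALM_ID',
    'baseUrl'      => 'Production',
));

// Stop the SDK from writing request/response files at all...
$dataService->disableLog();

// ...or keep the logging but send the files to a directory you control:
// $dataService->setLogLocation('/var/log/quickbooks');
If the project uses a different QuickBooks library, look for the equivalent logger toggle in its configuration instead.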

Related

wget gives 403 on accessible files

First time poster with a bizarre issue. I usually install software through conda, but from one moment to the next I stopped being able to use conda install because of a 403 error conda gets when trying to access some configuration files. When trying to download those files with wget --spider --debug https://conda.anaconda.org/anaconda/noarch/current_repodata.json, I get the same 403 error.
DEBUG output created by Wget 1.19.4 on linux-gnu.
Reading HSTS entries from /home/jsequeira/.wget-hsts
URI encoding = ‘UTF-8’
Converted file name 'current_repodata.json' (UTF-8) -> 'current_repodata.json' (UTF-8)
Spider mode enabled. Check if remote file exists.
--2020-07-30 11:25:59-- https://conda.anaconda.org/anaconda/noarch/current_repodata.json
Resolving conda.anaconda.org (conda.anaconda.org)... 104.17.92.24, 104.17.93.24, 2606:4700::6811:5d18, ...
Caching conda.anaconda.org => 104.17.92.24 104.17.93.24 2606:4700::6811:5d18 2606:4700::6811:5c18
Connecting to conda.anaconda.org (conda.anaconda.org)|104.17.92.24|:443... connected.
Created socket 5.
Releasing 0x000056545deb1850 (new refcount 1).
Initiating SSL handshake.
Handshake successful; connected socket 5 to SSL handle 0x000056545deb2700
certificate:
subject: CN=anaconda.org,O=Cloudflare\\, Inc.,L=San Francisco,ST=CA,C=US
issuer: CN=Cloudflare Inc ECC CA-3,O=Cloudflare\\, Inc.,C=US
X509 certificate successfully verified and matches host conda.anaconda.org
---request begin---
HEAD /anaconda/noarch/current_repodata.json HTTP/1.1
User-Agent: Wget/1.19.4 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: conda.anaconda.org
Connection: Keep-Alive
---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 403 Forbidden
Date: Thu, 30 Jul 2020 11:25:59 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
CF-Chl-Bypass: 1
Set-Cookie: __cfduid=d3cd3a67d3926551371d8ffe5a840b04f1596108359; expires=Sat, 29-Aug-20 11:25:59 GMT; path=/; domain=.anaconda.org; HttpOnly; SameSite=Lax
Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Expires: Thu, 01 Jan 1970 00:00:01 GMT
X-Frame-Options: SAMEORIGIN
cf-request-id: 044111dd9600005d4732b73200000001
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Vary: Accept-Encoding
Server: cloudflare
CF-RAY: 5baeb8dc2ba65d47-LIS
---response end---
403 Forbidden
cdm: 1
Stored cookie anaconda.org -1 (ANY) / <permanent> <insecure> [expiry 2020-08-29 11:25:59] __cfduid d3cd3a67d3926551371d8ffe5a840b04f1596108359
URI content encoding = ‘UTF-8’
Closed 5/SSL 0x000056545deb2700
Remote file does not exist -- broken link!!!
These files are accessible through the browser, and were always accessible with wget and conda until yesterday, when I was installing some tools not related to these network accesses. How can wget fail to download them?
So this was fixed by reinstalling apt-get. Some configuration file there must have been messed up.

NGINX + HHVM + Magento

I've been working on a development box deploying an Nginx, Magento, HHVM model. The site loads nicely and I don't see any clear errors. However, a good number of my images are not loading properly. All I get are the image dimensions. The version of HHVM I'm using is:
HipHop VM 3.3.0-dev (rel)
Compiler: heads/master-0-g39d07abf36f7cad883d6be2a384a77c0c8aac040
Repo schema: 248cb6ce2d336c890fb26fd16690bf1b53b46994
Extension API: 20140829
The version of Nginx is:
nginx version: nginx/1.7.4
Some images load, and others don't. We're using timthumb for images, and I do see one error:
Warning: Is a directory in /home/user/shared/timthumb/index.php on line 90
If I run a curl from the command-line, I get the following result for an image, which means it's being seen as far as I can tell:
HTTP/1.1 200 OK
Content-Type: image/gif
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: HHVM/3.3.0-dev
CF-Cache-Status: HIT
CF-RAY: 16513733df33085c-IAD
Server: cloudflare-nginx
Date: Fri, 05 Sep 2014 08:56:47 GMT
Expires: Sun, 05 Oct 2014 08:56:47 GMT
Cache-Control: public, max-age=2592000
Set-Cookie: __cfduid=d6c188df6595a443200995301f0b707981409907407975; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.placehold.it; HttpOnly
Last-Modified: Fri, 19 Nov 2010 07:00:00 GMT
Cache-Control: max-age=31536000, public, must-revalidate, proxy-revalidate
I have a separate test server with the same content/database running NGINX/PHP-FPM, and the only difference I can see in the curl output is the content type: against the dev server running PHP-FPM the result is Content-Type: image/jpeg, while against HHVM I get Content-Type: image/gif.
I would think this is related to rewrite rules, but I'm not sure. I also just discovered this error in exception.log:
2014-09-05T12:04:06+00:00 ERR (3): Warning: open(tcp://127.0.0.1:11211?persistent=1&weight=2&timeout=10&retry_interval=10/sess_dd885307c2ba602e5faa7020c9c7c038, O_RDWR) failed: No such file or directory (2) in on line 0
Any advice would be greatly appreciated.

Drive Realtime API no longer returns realtime document on localhost

I have been calling the following gapi javascript function with great success for a few months:
gapi.drive.realtime.load(fileId,
successHandler,
initializer,
errorHandler);
Suddenly, at 1:30 PM CDT today, that call stopped working when run in javascript on localhost. I can deploy the exact same code to my server and it works perfectly!
Frustratingly, none of the callbacks are called - not successHandler OR errorHandler.
I have localhost:3000 set as an allowed javascript origin in my Google API Console project, and anyway I haven't changed any settings there since this was working. I am correctly authorized and can make REST calls to the Drive API without an issue.
Has anyone else seen this behavior suddenly? Can anyone from the Google team make a suggestion?
Update: the request inspector shows a GET to
https://drive.google.com/otservice/gs?access_token=[omitted-for-stackoverflow]&id=[also-omitted]
with the response
)]}'
["17AKDsTY8kHESKfQavrHeh3YybD5k4b6ty8CQ78MHtyc","724b79b808d48070",false,1,[1,""],[0,[28,"724b79b808d48070","110581799581534438628",false,true,"REL DEV","#58B442","https://lh3.googleusercontent.com/-XdUIqdMkCWA/AAAAAAAAAAI/AAAAAAAAAAA/4252rscbv5M/s128/photo.jpg"]]]
The headers are
HTTP/1.1 200 OK
status: 200 OK
version: HTTP/1.1
access-control-allow-origin: *
access-control-expose-headers: Content-Length,Content-Type,X-Restart
alternate-protocol: 443:quic
cache-control: no-cache, no-store, max-age=0, must-revalidate
content-disposition: attachment; filename="json.txt"; filename*=UTF-8''json.txt
content-encoding: gzip
content-type: application/json; charset=utf-8
date: Fri, 04 Apr 2014 22:15:40 GMT
expires: Fri, 01 Jan 1990 00:00:00 GMT
pragma: no-cache
server: GSE
vary: Origin
x-content-type-options: nosniff
x-frame-options: SAMEORIGIN
x-restart:
x-xss-protection: 1; mode=block
There are no other requests after that.

Passenger Unknown Reason-Phrase

When I try to access my site through the main domain (example.com) I get the message below. But it doesn't happen if I access the site through a subdomain. I'm using Passenger with Nginx. Any ideas on how I can fix this? Thanks!
HTTP/1.1 16797828 Unknown Reason-Phrase
Status: 16797828 Unknown Reason-Phrase
Content-Type: text/html;charset=utf-8
Content-Length: 0
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Powered-By: Phusion Passenger 4.0.20
Date: Mon, 21 Oct 2013 09:06:42 GMT
It's because your app returned an invalid HTTP response code (namely '16797828'). You should fix your app not to do that.

Magento Soap API working locally but not remotely

I have created a custom API for Magento Enterprise 1.11. Calling the API through SOAP v1 works fine in my local dev environment; however, I am unable to make calls from my local environment to the remote environment.
Using PHP interactive shell on my localdev:
php > $client = new SoapClient(WSDL_URI,array('trace'=>1));
php > $client->login(API_USER,API_KEY);
php > var_dump($client->__getLastResponse());
string(538) "<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="urn:Magento" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><SOAP-ENV:Body><ns1:loginResponse><loginReturn xsi:type="xsd:string">f0eec73e49665aaf9cc4a6644fba5dc6</loginReturn></ns1:loginResponse></SOAP-ENV:Body></SOAP-ENV:Envelope>
I have been able to do this successfully from localhost, as well as between two local VMs running on my dev machine. I can also access the methods of my custom API without issue.
However, when I try to create a SOAP client for my remote test environment, I am able to create the client, but the call to $client->login(), or any subsequent call, results in the following:
php > $client = new SoapClient(REMOTE_WSDL_URI,array('trace'=>1));
php > $client->login(API_USER,API_KEY);
PHP Warning: Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://REMOTE_HOST/index.php/api/index/index/wsdl/1/' : failed to load external entity "http://REMOTE_HOST/index.php/api/index/index/wsdl/1/" in php shell code:1
Stack trace:
#0 php shell code(1): SoapClient->__call('login', Array)
#1 php shell code(1): SoapClient->login(API_USER, API_KEY)
#2 {main}
php > var_dump($client->__getLastRequestHeaders());
string(255) "POST /index.php/api/index/index/ HTTP/1.1
Host: REMOTE_HOST
Connection: Keep-Alive
User-Agent: PHP-SOAP/5.3.18-1~dotdeb.0
Content-Type: text/xml; charset=utf-8
SOAPAction: "urn:Mage_Api_Model_Server_HandlerAction"
Content-Length: 550
php > var_dump($client->__getLastResponseHeaders());
string(840) "HTTP/1.1 500 Internal Service Error
Date: Mon, 11 Feb 2013 19:06:56 GMT
Server: Apache/2.2.16 (Debian)
X-Powered-By: PHP/5.3.19-1~dotdeb.0
Set-Cookie: PHPSESSID=7uqrcmiv96hroubnb1uu7c7cm6; expires=Wed, 13-Feb-2013 01:06:56 GMT; path=/; domain=.REMOTE_HOST; HttpOnly
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Set-Cookie: CUSTOMER=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.REMOTE_HOST; httponly
Set-Cookie: CUSTOMER_INFO=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.REMOTE_HOST; httponly
Set-Cookie: CUSTOMER_AUTH=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/; domain=.REMOTE_HOST; httponly
Content-Length: 468
Vary: Accept-Encoding
Connection: close
Content-Type: text/xml; charset=utf-8
php > var_dump($client->__getLastResponse());
string(468) "<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Body><SOAP-ENV:Fault><faultcode>WSDL</faultcode><faultstring>SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://REMOTE_HOST/index.php/api/index/index/wsdl/1/' : failed to load external entity "http://REMOTE_HOST/index.php/api/index/index/wsdl/1/"
</faultstring></SOAP-ENV:Fault></SOAP-ENV:Body></SOAP-ENV:Envelope>
When I hit //REMOTE_HOST/index.php/api/?wsdl I get the standard Magento WSDL.
The two environments are 99.99% identical:
Server version: Apache/2.2.16 (Debian) (both local dev and remote)
PHP 5.3.18 (local dev) 5.3.19 (remote host)
Apache/PHP configurations are the same.
Code base is identical
I have scoured the interwebs for clues, including:
http://www.magentocommerce.com/boards/viewthread/56528/
http://www.magentocommerce.com/wiki/5_-_modules_and_development/web_services/overriding_an_existing_api_class_with_additional_functionality#wsdl
Unable to connect to Magento SOAP API v2 due to "failed to load external entity"
Magento API SOAP-ERROR: Parsing WSDL: Couldn't load from '[url]/index.php/api/index/index/?wsdl=1' : Couldn't find end of Start Tag part line 56
http://www.magentocommerce.com/api/soap/introduction.html
I've tried the "Content-Length" header fix mentioned in the sedond-to-last link, and just about everything else I could think of... Stumped.
While you can load the WSDL URL (http://REMOTE_HOST/index.php/api/index/index/wsdl/1/) from your computer, your remote server can't contact itself via its REMOTE_HOST.
PHP's SoapServer object (used by Magento's implementation) needs to contact the WSDL to know which methods are exposed.
For reasons I've never been able to figure out, it's a common network configuration for a server to not have access to its own DNS entries. Connect to your server via SSH and try running the following:
curl http://REMOTE_HOST/index.php/api/index/index/wsdl/1/
My guess is you'll get a network timeout or a REMOTE_HOST unknown error. Fix your configuration so your server can access itself, and everything should start working.
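As an additional sanity check from the server itself, you can try fetching the WSDL with plain PHP, since that exercises the same outbound-HTTP path the SoapClient/SoapServer pair relies on. This is only an illustrative sketch, with REMOTE_HOST standing in for your real hostname:
<?php
// Run this on the remote server. REMOTE_HOST is a placeholder.
$wsdlUrl = 'http://REMOTE_HOST/index.php/api/index/index/wsdl/1/';

$context = stream_context_create(array('http' => array('timeout' => 10)));
$wsdl = @file_get_contents($wsdlUrl, false, $context);

if ($wsdl === false) {
    // Same symptom as the SoapFault: the server cannot reach itself by name.
    echo "Could not fetch WSDL from {$wsdlUrl}\n";
} else {
    echo "Fetched " . strlen($wsdl) . " bytes of WSDL\n";
}
If this fails while the same URL works from your workstation, the problem is the server's own name resolution or firewall rules, not Magento.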
You could also try changing the host's DNS nameservers:
vim /etc/resolv.conf and add Google's 8.8.8.8 and 8.8.4.4
