JSON requested via Ajax: queries are very fast but the response is returned very slowly - Laravel

Note: this is a single Ajax request.
As you can see, I wrote the duration into the result; it's the duration of all the queries executed in the API backend.
The response length is 11 KB, so it's not a payload-size problem.
But the server takes 5 seconds to serve the page.
I'm using nginx, and on this server (a single-project dev VPS) there is NO traffic and no concurrency problems.
The backend is built in Laravel 8 and does only this:
$start = microtime(true);
$data = $this->articleRepository->getProducts($request->all());
$duration = microtime(true) - $start;

return response()->json([
    'status' => 'success',
    'data' => $data,
    'debug' => [
        'duration' => $duration
    ]
]);
I tried replacing Laravel's response helper with a plain json_encode():
$json = json_encode([
    'status' => 'success',
    'data' => $data,
    'debug' => [
        'duration' => $duration
    ]
]);

return $json;
But it takes the same time, so I think the problem is on the server side.
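To rule out serialization on its own, one could time the encode step separately (a minimal sketch):

// Sketch: measure only json_encode(), independent of the query and the transfer.
$encodeStart = microtime(true);
$json = json_encode(['status' => 'success', 'data' => $data]);
\Log::debug('json_encode took ' . (microtime(true) - $encodeStart) . 's for ' . strlen($json) . ' bytes');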
By the way, the dev VPS is a Debian 11 machine on my local network. We already verified that up/down bandwidth is well over 350 Mbit/s, symmetric, and stable.
I can't diagnose it. I have root access to the VPS, but I have no idea what could cause so much slowness.
Any idea?
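One way to see where the 5 seconds go is to time the request phases directly, bypassing the browser (a diagnostic sketch; the URL is a placeholder for your endpoint):

curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" "http://your-vps-host/api/products"

A large time_namelookup relative to time_total would point at name resolution rather than PHP or the network.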

In this very specific case, it was a question of DNS: resolving "localhost" was causing a series of problems and latencies. We changed every reference from 'localhost' to 127.0.0.1 and EVERYTHING was resolved.
Note for future Googlers using Laragon and/or XAMPP: we discovered by accident that changing localhost to 127.0.0.1 when configuring Redis in the Laravel .env file fixes a huge amount of latency on a Windows machine, with both Laragon and XAMPP.
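For reference, the kind of .env change involved looks like this (standard Laravel variable names; adapt to whichever services you point at localhost):

# Before: every connection may trigger a hostname lookup
DB_HOST=localhost
REDIS_HOST=localhost

# After: skip name resolution entirely
DB_HOST=127.0.0.1
REDIS_HOST=127.0.0.1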

Related

Issues in uploading files to Digital Ocean Spaces from Laravel

For almost 48 hours I've been facing an issue with file uploads to Digital Ocean Spaces from Laravel, and I can't make it work.
I've successfully built a Livewire component that handles multiple image uploads; each image is stored in Laravel's local storage.
Since we plan to host thousands of images on this website, we have now decided to use Digital Ocean Spaces as the storage disk.
To do so, I first installed the league/flysystem-aws-s3-v3 Composer package, as required.
Then, in my config/filesystems.php, I did the following:
'do_spaces' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'region' => env('DO_SPACES_REGION'),
    'bucket' => env('DO_SPACES_BUCKET'),
],
And in my .env, the following:
DO_SPACES_KEY=key
DO_SPACES_SECRET=secret
DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com
DO_SPACES_REGION=AMS3
DO_SPACES_BUCKET=bucket_name
In my Livewire component, after uploading the images to local storage, for each image I dispatch a job (dispatch(new ResizeUploadToDo($pic, $extension));) which is supposed to:
Retrieve the image from local storage.
Pass it to the Intervention library to resize it and apply a watermark.
Upload the manipulated image to Digital Ocean Spaces.
Remove the old image from local storage.
This is the code I've written so far in my job's handle() method:
public function handle()
{
    // Resolve the image's path on the local disk
    $path = Storage::disk('local')->path($this->pic->storage_file_path);

    // Load and resize it with Intervention
    $img = Image::make($path);
    $img->resize(600, 600);

    // Build a Y/m/d folder structure and a timestamped filename
    $folderStructure = Carbon::now()->format('Y') . '/' . Carbon::now()->format('m') . '/' . Carbon::now()->format('d') . '/';
    $filename = time() . '.' . $this->extension;

    // Upload to the Spaces disk
    Storage::disk('do_spaces')->put($folderStructure . $filename, $img, 'public');
}
The issues I'm now facing are the following:
If I try to dd($img) right after instantiating it with the Intervention library, I get the following:
Intervention\Image\Image {#1570 // app\Jobs\Images\ResizeUploadToDo.php:49
  #driver: Intervention\Image\Gd\Driver {#1578
    +decoder: Intervention\Image\Gd\Decoder {#1598
      -data: null
    }
    +encoder: Intervention\Image\Gd\Encoder {#363
      +result: null
      +image: null
      +format: null
      +quality: null
    }
  }
  #core: GdImage {#394
    +size: "600x600"
    +trueColor: true
  }
  #backups: []
  +encoded: ""
  +mime: "image/png"
  +dirname: "C:\Users\Gianmarco\wa\hotelista\storage\app22/12/12"
  +basename: "SPcNL5FD3OZV4heHWA103J4n5YU8xOCG1SU7pyMd.png"
  +extension: "png"
  +filename: "SPcNL5FD3OZV4heHWA103J4n5YU8xOCG1SU7pyMd"
}
To me it seems like the retrieved image is empty. Is that correct? Or what should I do to correctly retrieve the image from local storage?
I've noticed that if the job runs with the sync queue driver, files get uploaded to Digital Ocean Spaces but they are empty (file size: 0 MB), while if the job runs with the database queue driver, files are not uploaded to Digital Ocean Spaces at all.
Does anybody know how to solve this? Hope somebody can help me with it.
Thank you.
Add the entry 'url' => env('DO_SPACES_URL') below 'bucket' => env('DO_SPACES_BUCKET'), in filesystems.php.
In your .env put the variable DO_SPACES_URL=https://{your bucket name}.ams3.digitaloceanspaces.com
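In context, the disk entry would then look roughly like this (only the url line is new):

'do_spaces' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'region' => env('DO_SPACES_REGION'),
    'bucket' => env('DO_SPACES_BUCKET'),
    'url' => env('DO_SPACES_URL'),
],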
Upload the file like this:
Storage::disk('do_spaces')->put('uploads', $request->file('file'), 'public');
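As for the 0-byte files from the job: an Intervention Image object stringifies to its encoded buffer, which stays empty until encode() is called, so passing $img straight to put() can upload empty content. A sketch of the likely fix, reusing the variables from the handle() method above:

// Encode the resized image first, then hand the raw bytes to the Storage facade.
$contents = (string) $img->encode($this->extension);
Storage::disk('do_spaces')->put($folderStructure . $filename, $contents, 'public');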

How to fix "Client Error: file_get_contents()" in cPanel with a Laravel project inside

I have a problem when running the seeder in my Laravel project on cPanel.
This is the error:
Client Error: file_get_contents(): https:// wrapper is disabled in the server configuration by allow_url_fopen=0
at vendor/kavist/rajaongkir/src/HttpClients/BasicClient.php:74
  70▕
  71▕     private function executeRequest(string $url): array
  72▕     {
  73▕         set_error_handler(function ($severity, $message) {
➜ 74▕             throw new BasicHttpClientException('Client Error: '.$message, $severity);
  75▕         });
  76▕
  77▕         $rawResponse = file_get_contents($url, false, $this->context);
Please, someone help me. This is my LocationsTableSeeder.php:
public function run()
{
    // Fetch all provinces from the RajaOngkir API
    $daftarProvinsi = RajaOngkir::provinsi()->all();

    foreach ($daftarProvinsi as $provinceRow) {
        Province::create([
            'province_id' => $provinceRow['province_id'],
            'nama' => $provinceRow['province'],
        ]);

        // Fetch the cities belonging to this province
        $daftarKota = RajaOngkir::kota()->dariProvinsi($provinceRow['province_id'])->get();

        foreach ($daftarKota as $cityRow) {
            Kabupaten::create([
                'province_id' => $provinceRow['province_id'],
                'city_id' => $cityRow['city_id'],
                'nama' => $cityRow['city_name'],
                'type' => $cityRow['type'],
                'postal_code' => $cityRow['postal_code'],
            ]);
        }
    }
}
It's good practice to disable file_get_contents()'s ability to open remote URLs (the ones starting with http) on shared servers (which frequently use cPanel), to prevent the download/injection of malicious scripts on your server.
Go to cPanel's PHP options and enable allow_url_fopen; as pointed out by apokryfos, it's usually under the Switch To PHP Options menu. Some providers won't allow this change via cPanel, and you might need to open a support ticket.
This option usually cannot be changed by ini_set() or from the PHP script itself in any other way.
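To check the current value from the shell (note that CLI and web PHP can load different php.ini files on cPanel, so check both where possible):

php -r "var_dump(ini_get('allow_url_fopen'));"   # "1" when enabled, "0" or "" when disabled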

How to manage data in DO Spaces with Laravel Storage

I am trying to manage DO Spaces with Laravel 8's Storage facade; however, I am getting errors that seem to come from Laravel's side.
To start, I ran this line in the terminal, as instructed in Laravel's documentation:
composer require league/flysystem-aws-s3-v3 "~1.0"
Afterwards I edited my environment variables:
DO_SPACES_KEY=*KEY*
DO_SPACES_SECRET=*SECRET*
DO_SPACES_ENDPOINT=ams3.digitaloceanspaces.com
DO_SPACES_REGION=AMS3
DO_SPACES_BUCKET=test-name
and added this to config/filesystems.php:
'do_spaces' => [
    'driver' => 's3',
    'key' => env('DO_SPACES_KEY'),
    'secret' => env('DO_SPACES_SECRET'),
    'endpoint' => env('DO_SPACES_ENDPOINT'),
    'region' => env('DO_SPACES_REGION'),
    'bucket' => env('DO_SPACES_BUCKET'),
],
After visiting this test route:
Route::get('/test', function (Request $request) {
    Storage::disk('do_spaces')->put('test.txt', 'hello world');
});
I am getting this error:
Error executing "PutObject" on "//test-name./test-name/test.txt"; AWS HTTP error: cURL error 6: Couldn't resolve host 'test-name' (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://test-name./test-name/test.txt
It seems the problem occurs while Laravel is building the URL, which should not look like it does here (wrong: http://test-name./test-name/test.txt). However, I have no clue how to fix this or what I am doing wrong, since I followed all the steps exactly as many tutorials and docs describe.
I had the same problem. I solved it the following way:
Add https:// to DO_SPACES_ENDPOINT (https://ams3.digitaloceanspaces.com).
In the put method, use the path to test.txt:
Storage::disk('do_spaces')->put('YOUR_SPACE_NAME/YOUR_FOLDER_NAME(if you have)/test.txt', 'hello world');
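So the corrected .env line is simply the endpoint with its scheme:

DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com

Without the scheme, the S3 client appears to mis-parse the endpoint and builds the request host from the bucket name alone, which is where the unresolvable test-name host comes from.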

CakePHP 3.x ORM doesn't use cached data when the key exists

CakePHP 3.5.13 with Redis configured as the cache engine:
// config/app.php
'Cache' => [
    'default' => [
        'className' => 'Redis',
        'duration' => '+1 hours',
        'prefix' => 'cake_redis_',
        'host' => '127.0.0.1',
        'port' => 6379,
    ],
],
I have a table with ~260,000 rows in it and a corresponding Table class called SubstancesTable.php. I'm attempting to get the first 5000 rows and then cache the results, so that on subsequent queries, the cached results are used rather than executing the same query:
// Controller method
public function test()
{
    $this->autoRender = false;
    $Substances = TableRegistry::get('Substances');

    // Get 5000 rows from the table
    $query = $Substances->find('list')->limit(5000);

    // Cache the results under this key
    $query->cache('test_cache_key');

    // Output the results
    debug($query->toArray());
}
When I log in to Redis (running redis-cli over SSH on my web server), I can see a key has been generated with the name "test_cache_key":
127.0.0.1:6379> KEYS *
1) "cake_redis_test_cache_key"
I can also see the serialized data in there using GET cake_redis_test_cache_key.
When I execute the above in a browser, there is virtually no difference in time between the cache not existing and the cache having been created. I deleted the cached key in Redis using DEL cake_redis_test_cache_key and confirmed it was gone by listing the keys in Redis (KEYS *).
Clearly Cake isn't reading from the cache in this situation, even though it's writing to it without problems. Why is this happening?
The documentation (https://book.cakephp.org/3.0/en/orm/query-builder.html#caching-query-results) is not clear. Do I need to do something else to get it to read the results from the cache? I've also read CakePHP 3: find() with cache but can't see what's being done differently to what I'm doing above.
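One diagnostic worth trying (a sketch, assuming the 'default' cache config, which is what $query->cache() uses when no config is given): read the key back directly, to separate "Cake can't read the cache" from "reading the cache just isn't faster than the query":

use Cake\Cache\Cache;

// Read the cached result set straight from the 'default' config.
$cached = Cache::read('test_cache_key', 'default');
debug($cached === false ? 'cache miss' : 'cache hit');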

Stripe payment in Laravel

I have been trying to integrate Stripe payments in Laravel, but when I try to create a customer in Stripe, it isn't created.
Here is my code:
try {
    // Create a Stripe customer from the email and card token
    $customer = \Stripe\Customer::create(array(
        'email' => $this->user_email,
        'card' => $this->stripe_token
    ));

    // Debug output: dump the customer and stop here
    echo "<pre>";
    print_r($customer);
    exit;

    // Charge the newly created customer
    $charge = \Stripe\Charge::create(array(
        'customer' => $customer->id,
        'amount' => $amountstripe,
        'currency' => 'EUR'
    ));
}
I got an error message like this:
Could not connect to Stripe (https://api.stripe.com/v1/customers). Please check your internet connection and try again. If this problem persists, you should check Stripe's service status at https://twitter.com/stripestatus, or let us know at support@stripe.com.
(Network error [errno 6]: Could not resolve host: api.stripe.com)
How can I resolve this problem?
This is not an error with Laravel or Stripe, but "simply" a network error. Your server cannot send requests to Stripe's API because it cannot resolve the hostname api.stripe.com to an IP address.
You'd need to get in touch with your network administrator or hosting provider and ask them to check the connectivity between your server and Stripe's. The following script may be helpful in diagnosing the issue: https://github.com/stripe/stripe-reachability#stripe-reachability
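A quick check you can run from the server yourself (gethostbyname() returns the hostname unchanged when resolution fails, and an IP address when it succeeds):

php -r "var_dump(gethostbyname('api.stripe.com'));"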
