Laravel S3 File get contents - laravel

Inside my Laravel application, in my job class, I have the following code. On my live server it runs just fine, but locally I get an error and I'm not sure what I need to do to fix it. Has anyone been able to solve this using the AWS S3 file driver for Laravel?
Storage::disk('s3')->put($path, file_get_contents($this->url), 'public');
file_get_contents(http://webapp.dev/storage/uploads/folder/folder/folder/imagename.jpeg): failed to open stream: HTTP request failed! HTTP/1.1 404 Not Found

Do you have a webserver running locally that listens on port 80 for requests made to webapp.dev?
Does the directory for webapp.dev in fact have "imagename.jpeg" in that location?
This just looks like a 404 because that address doesn't exist on your local environment, but does exist on your live one.
Or, the context of $this is different on your local environment than it is on your production environment. We can't tell that from your original post, though, because you've only provided that one line and the resulting error.
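If the file does live on the local machine (e.g. under public/storage via the storage:link symlink), one way to sidestep the local webserver entirely is to read it from disk instead of over HTTP. This is a sketch, not the asker's code; it assumes the URL's path maps directly onto the project's public/ directory:

```php
// Sketch, assuming $this->url's path maps onto public/ (e.g. via the
// storage:link symlink), so the file can be read from the filesystem
// instead of through an HTTP request to the local webserver.
$relative  = ltrim(parse_url($this->url, PHP_URL_PATH), '/'); // "storage/uploads/..."
$localPath = public_path($relative);

if (is_readable($localPath)) {
    Storage::disk('s3')->put($path, file_get_contents($localPath), 'public');
}
```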

Related

Laravel 9 on IIS returns 500 server error when "php artisan serve" on same server works

I am trying to deploy a Laravel 9 site onto an IIS Server (and no, I don't have the option of using a Linux server). If I run the local server setup with "php artisan serve", it works fine through 127.0.0.1 on the server, including all calls to the database.
However, if I try to run the site through the IIS server via its domain name, I get a 500 server error. Failed Response Tracing shows a FASTCGI_UNKNOWN_ERROR: "The directory name is invalid. (0x8007010b)"
The DNS is functioning properly as I have tested a phpinfo page on it.
Is there a configuration in IIS I need to set in order for the Laravel site to work?
The problem is not clear, but try pointing IIS directly at the public folder of your Laravel project.
Hope this helps!
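For reference, pointing IIS at the public folder usually goes together with a web.config rewrite rule so that pretty URLs fall through to index.php. A rough sketch of the standard URL Rewrite approach (the rule name is arbitrary):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Send every request that isn't a real file or directory to index.php -->
        <rule name="LaravelFrontController" stopProcessing="true">
          <match url="^" ignoreCase="false" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

This requires the IIS URL Rewrite module to be installed.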
It turned out I didn't have my php-cgi.exe file mapped correctly in IIS. I had to edit my Handler Mappings and link the FastCgiModule handler to my current installation of PHP, mapping it to the php-cgi.exe file.

Suddenly can't access DO Spaces locally (Laravel)

I have a laravel site up and running. We have three copies currently working - local, staging and production.
Up until today all three of these were accessing the same DigitalOcean Space with no issue.
Today we are getting a timeout whenever a request is made from the local environment; it continues to work perfectly on staging and production. Our .env files are identical with the exception of app key/name etc. Our config files are identical. The code that makes the request is identical.
We are receiving the following error
Aws\S3\Exception\S3Exception: Error executing "ListObjects" on "https://example.com/?prefix=document.pdf%2F&max-keys=1&encoding-type=url"; AWS HTTP error: cURL error 28: Failed to connect to site.com port 443: Connection timed out (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://example.com/?prefix=document.pdf%2F&max-keys=1&encoding-type=url in file /var/www/html/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php on line 195
We have tried everything we can think of. We have completely restarted the local servers (Laravel Sail) to no effect. The only difference is that the local copy of the site is served over http, whereas both staging and production are served over https. This hasn't caused an issue in the past, however.
Any ideas on what could be causing this would be greatly appreciated.
Thanks
To anyone who finds this in the future:
The issue resolved itself after about 12 hours.
It is almost certain that this was an issue on DO's end.
If it occurs again I'll be contacting support, as @James has pointed out.

Laravel Sub-subdomain

I've been trying to get my head around this all day. I understand how to create and manage single level subdomains in laravel, such as subdomain.domain.com
However, I'm trying to add a second level subdomain, for example: subsubdomain.subdomain.domain.com
I can get Homestead working fine with single level subdomains, but whenever I add the extra subdomain, I get a connection refused - unable to connect error in Chrome.
There's nothing in the nginx error log either.
This is what I've done:
Open `~/.homestead/Homestead.yaml`
Add the domain subsubdomain.subdomain.domain.com in addition to subdomain.domain.com
Save and exit, then run `vagrant reload --provision`
I can see the new sub-subdomain added to the hosts file, as well as a conf file created in the vagrant box
When I try to access subdomain.domain.com it works fine, when I try to access subsubdomain.subdomain.domain.com it fails with refused to connect.
I have no idea what to try next, there's nothing in the nginx error log, Homestead is up and running because I can access the single level subdomain completely fine. The only one that isn't working is the second level subdomain.
Any info on what I might be doing wrong, or anything else that might be helpful to debug would be greatly appreciated.
Update
I've managed to connect to the server if I add the port :8000 to the address: subsubdomain.subdomain.domain.com doesn't work, but subsubdomain.subdomain.domain.com:8000 works
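For reference, the setup described above amounts to entries roughly like these (the project path is illustrative; 192.168.10.10 is Homestead's default private IP):

```yaml
# ~/.homestead/Homestead.yaml (excerpt)
sites:
    - map: subdomain.domain.com
      to: /home/vagrant/code/project/public
    - map: subsubdomain.subdomain.domain.com
      to: /home/vagrant/code/project/public
```

Both names also need entries in the host machine's /etc/hosts file. Pointing them at the VM's IP (192.168.10.10) rather than 127.0.0.1 lets you reach the box directly on port 80, without the :8000 forwarded port.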

Apache localhost/~username/ Forbidden

I am currently trying to get my Apache web server's localhost working so I can work on PHP websites on my own computer. I am running Mac OS X 10.11.2.
I have tried every resource from every link I can find covering the same problem. I can access localhost, but when I try to access localhost/~username I get a 403 Forbidden error! It says I do not have permission. I have uncommented all of the lines that I am told to, but I cannot get it to work.
Any help is much appreciated
I can't figure out how to post my config, but I assure you I have uncommented everything I need to
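Since the actual config wasn't posted: on OS X the per-user directory setup typically involves a file like the one below, in addition to enabling mod_userdir and the PHP module in httpd.conf. The username here is a placeholder, and the `Require` line is the Apache 2.4 syntax:

```apacheconf
# /etc/apache2/users/username.conf  ("username" is a placeholder)
<Directory "/Users/username/Sites/">
    Options Indexes MultiViews
    AllowOverride All
    Require all granted   # Apache 2.4; older 2.2 configs use "Allow from all"
</Directory>
```

A 403 despite correct config often comes down to filesystem permissions: Apache needs execute permission on every directory in the path down to ~/Sites.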

Homestead/Vagrant refusing Image manipulation

I am running into a problem with Homestead. I have a piece of code that works well on an online dev server, but fails in the vagrant Homestead one.
The piece of code is executed via AJAX: I upload an image, save it in a temp directory and send it back to the user, who then crops it. For this, I have two functions, tempUpload and tempCrop. It is failing in tempCrop, and the line that triggers it is the following:
$img = getimagesize($imgUrl);
$imgUrl is an input with the url to an image in a temp folder. Checking in vagrant, I saw that these images have attributes -rwxrwxrwx 1 vagrant vagrant. The error appearing in the console is "SyntaxError: JSON.parse: unexpected character at line 1 column 2 of the JSON data".
Again, this works perfectly in the online version, so I guess it is either a permission problem or some environment setting. The permissions for the temp folder in Vagrant are as follows: drwxrwxrwx 1 vagrant vagrant
I have checked in /app/storage/logs, and the error I'm getting is:
'getimagesize(http://nominate.app:8000/temp/2078ec37e959dd733930ad758854ce4cb5f175de.jpg): failed to open stream: Connection refused'
I really don't know what else to look into, especially since it is working fine in another dev environment I have, running CentOS.
Any ideas?
Thanks a lot.
The answer is to pass the filesystem path, rather than the URL, to getimagesize.
Basically, as far as your virtual machine knows, it's serving on port 80. Vagrant seamlessly forwards this port to the host machine's port 8000. When you call getimagesize, it first tries to resolve the hostname (nominate.app), and, if successful, it tries to initiate a connection to it on port 8000. I'm guessing that nominate.app is configured to resolve to 127.0.0.1 (the VM), which isn't actually listening on port 8000.
It's a bad idea to perform those sorts of operations over HTTP, since it'll slow things down and potentially generate multiple temporary copies of the same image. You can use Laravel's path helpers to help you determine the local path of the image (i.e. getimagesize(public_path() . "/temp/" . $filename)).
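Concretely, the fix sketched above could look like this (a rough illustration, assuming the temp URL's filename maps directly under public/temp/):

```php
// Before (fails inside the VM, which isn't listening on the forwarded port):
// $img = getimagesize($imgUrl);

// After: derive the local filesystem path from the URL instead of
// fetching the image over HTTP.
$filename = basename(parse_url($imgUrl, PHP_URL_PATH));
$img = getimagesize(public_path('temp/' . $filename));
```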
Changing my host file to point homestead.app to the Homestead VM's IP of 192.168.10.10 rather than 127.0.0.1 resolved this very same issue. Additionally, I can now navigate to http://homestead.app rather than http://homestead.app:8000.
Thanks to ClarkF for pointing me in the right direction to get this resolved!
As much as we don't enjoy "it works in development but not in production", it's even more curious when it works in production but not in development!