Amazon S3 Upload with Laravel Livewire 404

Every time I try to upload, it gives me this error:
Error executing "HeadObject" on *BUCKET URL* AWS HTTP error: Client error: `HEAD *BUCKET URL* ` resulted in a `404 Not Found` response NotFound (client): 404 Not Found
I can't seem to figure out what the problem is. I set up everything: the .env file, IAM, S3, and the S3 CORS configuration.
$this->logo->storeAs('logos', $file_name, 's3');
This is what I'm using for the upload; it works when I don't use S3.
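For reference, here is a minimal sketch of the 's3' disk that the third storeAs() argument resolves to (config/filesystems.php, assuming Laravel's default env variable names). A 404 NotFound on HeadObject means S3 couldn't find the bucket or key being checked, so the AWS_BUCKET value in particular is worth verifying against the real bucket name:
// config/filesystems.php -- the disk that storeAs('logos', $file_name, 's3') targets.
// Assumes Laravel's stock env variable names; your values will differ.
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'), // must match the bucket's exact name
],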
My CORS Config
{
    "AllowedHeaders": [
        "*"
    ],
    "AllowedMethods": [
        "PUT",
        "POST",
        "DELETE"
    ],
    "AllowedOrigins": [
        "*My Website*"
    ],
    "ExposeHeaders": []
}

Related

Laravel Vite: Assets blocked/Mixed Content issues in production environment

I'm hosting my app on an EC2 instance behind an Elastic Load Balancer, which manages my SSL certificate. On this EC2 instance my nginx configuration redirects all HTTP requests to HTTPS.
I recently switched to Vite, which caused me a lot of trouble. When I push my app to the server after running npm run build, my assets are blocked. In the browser console I get:
Mixed Content: The page at 'example.com' was loaded over HTTPS, but requested an insecure ...
My Setup:
vite.config.js
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
import vue from '@vitejs/plugin-vue';

export default defineConfig({
    server: {
        host: 'localhost',
    },
    plugins: [
        laravel([
            'resources/assets/sass/app.sass',
            // etc...
        ]),
        vue({
            template: {
                transformAssetUrls: {
                    base: null,
                    includeAbsolute: false,
                },
            },
        }),
    ],
});
Setting "https: true" in the server-block didn't help me.
.env
APP_ENV=production
APP_URL=https://example.com
ASSET_URL=https://example.com
In my Blade template I'm using the Vite directive:
@vite('resources/assets/sass/app.sass')
I tried the following solutions:
Setting $proxies = '*' in TrustProxies.php, which doesn't have any effect.
Setting URL::forceScheme('https'); in AppServiceProvider.php, which loads the assets but leads to a lot of other issues.
Somehow the @vite directive is not resolving my assets as secure assets. With Laravel Mix I could just call secure_asset.
How can I fix this?
In the end I used the TrustProxies middleware. However, back then I had forgotten to register it as global middleware.
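For completeness, a rough sketch of what that registration looks like (assuming a Laravel 8/9-style HTTP kernel; adjust to your version):
// app/Http/Kernel.php -- TrustProxies has to sit in the global middleware stack,
// otherwise the X-Forwarded-Proto header set by the load balancer is ignored
// and Laravel keeps generating http:// URLs for assets.
protected $middleware = [
    \App\Http\Middleware\TrustProxies::class,
    // ...
];

// app/Http/Middleware/TrustProxies.php -- trust the ELB in front of the
// instance, matching the $proxies = '*' setting mentioned in the question.
protected $proxies = '*';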
import fs from 'fs';

const host = 'example.com';

server: {
    host,
    hmr: { host },
    https: {
        key: fs.readFileSync(`ssl-path`),
        cert: fs.readFileSync(`ssl-cert-path`),
    },
},
You should add this to your vite.config.js file, along with ASSET_URL in your .env file.

Unable to delete an image from the media-library in Strapi V4

I have a Strapi v4 dashboard deployed on Heroku. Everything works fine, except that some images can't be deleted; deleting them fails with a status code 500 error.
My plugins.js file is below.
(screenshot of the error in Strapi)
// config/plugins.js
module.exports = ({ env }) => ({
    upload: {
        config: {
            provider: "cloudinary",
            providerOptions: {
                cloud_name: env("CLOUDINARY_NAME"),
                api_key: env("CLOUDINARY_KEY"),
                api_secret: env("CLOUDINARY_SECRET"),
            },
            actionOptions: {
                upload: {},
                delete: {},
            },
        },
    },
});
The fix was to comment out the Cloudinary config in plugins.js.
That is not really solving the issue:
OK, after deactivating the Cloudinary plugin, the images in the Strapi Media Library can be deleted. But the same issue happens again as soon as the Cloudinary plugin is activated again.

Using a Laravel Socialite user returns missing authorization_code

I'm using Laravel Socialite to log in with the SuperOffice API. I've only just added the provider as a PR, but I'm testing it already. I'm using the superoffice provider locally, inside a superoffice-api package I'm creating. I've added both packages to composer.json in the Laravel app:
"repositories": [
{
"type": "path",
"url": "./packages/superoffice-api"
},
{
"type": "path",
"url": "./packages/superoffice"
}
]
I've also added the superoffice Socialite provider to the superoffice-api composer.json in the same way.
The login process works, but the problem starts when trying to use the user for other API calls. What I mean by this is that in the callback I can do the following:
public function superofficeCallback(Request $request): RedirectResponse
{
    $user = Socialite::driver('superoffice')->stateless()->user();

    return redirect()->route('dashboard.index')->with([
        'message' => 'Logged in with SuperOffice as '.$user->name,
        'success' => true,
    ]);
}
This shows the $user->name as expected. Now when trying to call Socialite::driver('superoffice')->stateless()->user() in the superoffice-api package I get the following error message:
GuzzleHttp\Exception\ClientException: Client error: POST https://sod.superoffice.com/login/common/oauth/tokens resulted in a 400 Bad Request response:
{ "error": "invalid_request", "error_description": "missing authorization_code"}
It doesn't matter whether it's called in a method or in the __construct() of a class.
So my question is: how can I use a Socialite superoffice user in the superoffice-api package? This is needed to get the access_token. I can imagine that because Socialite is called from inside a package, some sort of reference is missing.
The problem here is that the access_token and refresh_token need to be stored in some other way in the callback function. When they are stored, for example in the database, you're able to use the tokens anywhere.
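A minimal sketch of what that could look like (the superoffice_* column names are hypothetical). The second ->user() call fails because it tries to exchange the one-time authorization code from the redirect again, so the tokens have to be captured while the callback still has them:
public function superofficeCallback(Request $request): RedirectResponse
{
    $user = Socialite::driver('superoffice')->stateless()->user();

    // Persist the tokens once, while the callback has them.
    // Hypothetical columns on the users table; use whatever storage fits.
    $request->user()->update([
        'superoffice_access_token'  => $user->token,
        'superoffice_refresh_token' => $user->refreshToken,
    ]);

    return redirect()->route('dashboard.index');
}
The superoffice-api package can then read the stored access_token directly instead of calling Socialite::driver('superoffice')->stateless()->user() a second time.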

Parse Server S3 file adapter with Heroku app

I am trying to set up the S3 file adapter, but I'm not sure if I have the formatting of something incorrect. I have followed this guide exactly:
https://github.com/ParsePlatform/parse-server/wiki/Configuring-File-Adapters#configuring-s3adapter
But when I uncomment the block of code below, put in my AWS credentials, and push the setup back to Heroku, the app and dashboard won't start any longer, saying there is an application error:
//**** File Storage ****//
filesAdapter: new S3Adapter(
    {
        "xxxxxxxx",
        "xxxxxxxx",
        "xxxxxxxx",
        {directAccess: true}
    }
)
I would set it up as follows for Heroku:
Make sure that after performing all the steps described in the guide, your policy looks similar to this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME",
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}
Now apply this policy to the bucket: select your bucket in the S3 console and click the 'Properties' button in the top right corner. Expand the 'Permissions' section, press 'Edit bucket policy', and paste the JSON above into the text field.
Configure Parse Server in the index.js file:
var S3Adapter = require('parse-server').S3Adapter;
var s3Adapter = new S3Adapter(
    "AWS_KEY",
    "AWS_SECRET_KEY",
    "bucket-name",
    { directAccess: true }
);
and add two lines to the Parse Server init (var api = new ParseServer({..})):
filesAdapter: s3Adapter,
fileKey: process.env.PARSE_FILE_KEY
Similar to Cliff's post, .S3Adapter has to be outside the parentheses:
var S3Adapter = require('parse-server').S3Adapter;
And then inside parse server init:
filesAdapter: new S3Adapter(
    {
        accessKey: process.env.S3_ACCESS_KEY || '',
        secretKey: process.env.S3_SECRET_KEY || '',
        bucket: process.env.S3_BUCKET || '',
        directAccess: true
    }
)
This worked in this case.

AWS SDK for PHP returns an error when trying to get objects through Cloudflare with a CNAME (Laravel 5.1)

I have an Amazon S3 bucket named mysub.domain.com and I'm trying to put and get data from it through Cloudflare's CDN (the app is based on Laravel 5.1 with CodeSleeve/laravel-stapler, which depends on aws/aws-sdk-php).
My Amazon S3 bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mysub.domain.com/*"
        }
    ]
}
And CORS Configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
My Stapler config for s3:
's3_client_config' => [
    ...
    'endpoint' => 'https://mysub.domain.com',
    ...
],
's3_object_config' => [
    'Bucket' => 'mysub.domain.com',
    ...
],
I've created a CNAME on Cloudflare for my subdomain pointing to the Amazon S3 bucket, as mentioned in the documentation:
mysub.domain.com CNAME mysub.domain.com.s3.amazonaws.com
It works without the endpoint, but then it doesn't go through the CDN, because it uses path-style URLs like s3.amazonaws.com/mysub.domain.com. When I add the endpoint, it uses https://mysub.domain.com/mysub.domain.com (endpoint plus bucket name). Either way it should add objects under the bucket path /mysub.domain.com/path/to/file.jpg, but it gives me an error:
Aws\S3\Exception\SignatureDoesNotMatchException: AWS Error Code: SignatureDoesNotMatch, Status Code: 403, AWS Request ID: ABDC27DF1F472901, AWS Error Type: client, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method.
But as I said, it works without the endpoint.
Is there any way to avoid this error and the duplicated bucket name in the URL (maybe there's a way to switch it to domain-style URLs)?
Thank you in advance.
It's hard to say for sure with the details provided, but it may be that you have the bucket CNAME set up incorrectly. You may want to look at this Help Center article.
If that doesn't work you should send an email to support[at]cloudflare[dot]com so they can dig deeper.
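For the duplicated bucket name specifically, one thing that may be worth trying (an assumption on my part: this uses the bucket_endpoint client option from aws/aws-sdk-php v3, so it only applies if your Stapler setup runs on that SDK version):
's3_client_config' => [
    // With bucket_endpoint the SDK treats the endpoint as already addressing
    // the bucket, so it stops injecting the bucket name into the path again.
    'endpoint'        => 'https://mysub.domain.com',
    'bucket_endpoint' => true,
    // ...
],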
