Posting object to Google Cloud Storage doesn't replace ${filename} variable - ruby

As described here https://cloud.google.com/storage/docs/xml-api/post-object
I can use ${filename} as part of the key when uploading a file to GCS from the browser (using a signed request).
That's great, because it's the exact same variable S3 uses. But there's a problem: it doesn't actually replace ${filename} with the name of the file when I upload. This works perfectly on S3, but with GCS I literally get $%7Bfilename%7D as the key in the response; clearly it's not replacing the placeholder with the correct value.
I'm building the request like so:
export function uploadVideo(video) {
  return new Promise((resolve, reject) => {
    if (!video) {
      resolve(null);
      return;
    }
    // this fetches a presigned post object from the server:
    request.get('/presigned_post')
      .query({ format: 'json' })
      .end((err, response) => {
        if (!err && response.ok) {
          request
            .post(`https://${response.body.url.host}/`)
            .field(response.body.fields)
            .field('Content-Type', video.type)
            .attach('file', video, video.name)
            .end((err, response) => {
              if (err) {
                console.log("Error uploading video: ", err);
                reject(err);
              } else {
                resolve({
                  location: response.text.match('<Location>' + '(.*?)' + '</Location>')[1],
                  bucket: response.text.match('<Bucket>' + '(.*?)' + '</Bucket>')[1],
                  key: response.text.match('<Key>' + '(.*?)' + '</Key>')[1],
                  checksum: response.text.match('<ETag>"' + '(.*?)' + '"</ETag>')[1]
                });
              }
            });
        }
      });
  });
}
The key is some_folder/${filename} and this is included in the form data sent to Google (in response.body.fields).
You can see I am providing the filename with .attach('file', video, video.name). The file uploads correctly; it just doesn't replace the ${filename} variable.
Edit:
I have narrowed the issue down further. We query both S3 and GCS to get the fields for the presigned post. With S3, if I give a key of ${filename} then I get exactly that back in the fields:
def presigned_post(bucket_name:, key:, **opts)
  response = s3_resource.
    bucket(bucket_name).
    presigned_post(key: key, content_type_starts_with: '', **opts)
  {
    fields: response.fields,
    url: { host: URI.parse(response.url).host }
  }
end
Fields here will contain "key"=>"${filename}".
However, in the GCS case, the following code returns :key=>"$%7Bfilename%7D" in the fields:
# https://cloud.google.com/storage/docs/xml-api/post-object#usage_and_examples
# https://www.rubydoc.info/gems/google-cloud-storage/1.0.1/Google/Cloud/Storage/Bucket:post_object
def presigned_post(bucket_name:, key:, acl:, success_action_status:, expiration: nil)
  expiration ||= (Time.now + 1.hour).iso8601
  policy = {
    expiration: expiration,
    conditions: [
      ["starts-with", "$key", ""],
      ["starts-with", "$Content-Type", ""],
      { acl: acl },
      { success_action_status: success_action_status }
    ]
  }
  post_obj = get_bucket!(bucket_name).post_object(key, policy: policy)
  url_obj = { host: URI.parse(post_obj.url).host }
  # Have to manually merge in these fields
  fields = post_obj.fields.merge(
    acl: acl,
    success_action_status: success_action_status
  )
  return { fields: fields, url: url_obj }
end
If I manually change the key in the GCS request fields, then it works. Is that really what I'm supposed to do?
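For illustration, a minimal client-side version of that manual fix (a sketch built on the superagent snippet above; the decodeURIComponent call is my workaround, not part of any official API):

// Un-escape the key the server handed back, so the form field sent to GCS
// contains the literal ${filename} placeholder instead of $%7Bfilename%7D:
const fields = {
  ...response.body.fields,
  key: decodeURIComponent(response.body.fields.key)
};
// ...then post `fields` to GCS exactly as before.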

A fix for this issue was released in google-cloud-storage v1.24.0.

Figured out the issue is caused by:
def ext_path
  URI.escape "/#{@bucket}/#{@path}"
end
in lib/google/cloud/storage/file.rb of the google-cloud-storage gem.
This is called by post_object in that same file.
I am going to raise an issue on their GitHub page.

Related

How do I create a HttpOrigin for Cloudfront to use a Lambda function url?

I'm trying to set up a CloudFront behaviour to use a Lambda function URL with code like this:
this.distribution = new Distribution(this, id + "Distro", {
  comment: id + "Distro",
  defaultBehavior: {
    origin: new S3Origin(s3Site),
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
  additionalBehaviors: {
    [`api-prd-v2/*`]: {
      compress: true,
      originRequestPolicy: originRequestPolicy,
      origin: new HttpOrigin(functionUrl.url, {
        protocolPolicy: OriginProtocolPolicy.HTTPS_ONLY,
        originSslProtocols: [OriginSslPolicy.TLS_V1_2],
      }),
      allowedMethods: AllowedMethods.ALLOW_ALL,
      viewerProtocolPolicy: ViewerProtocolPolicy.HTTPS_ONLY,
      cachePolicy: apiCachePolicy,
    },
  },
});
The functionUrl object is created in a different stack and passed in to the CloudFormation stack; its definition looks like:
this.functionUrl = new FunctionUrl(this, 'LambdaApiUrl', {
  function: this.lambdaFunction,
  authType: FunctionUrlAuthType.NONE,
  cors: {
    allowedOrigins: ["*"],
    allowedMethods: [HttpMethod.GET, HttpMethod.POST],
    allowCredentials: true,
    maxAge: Duration.minutes(1)
  }
});
The above code fails because "The parameter origin name cannot contain a colon".
Presumably, this is because functionUrl.url evaluates to something like https://xxx.lambda-url.ap-southeast-2.on.aws/ (note the https://) whereas the HttpOrigin parameter should just be the domain name like xxx.lambda-url.ap-southeast-2.on.aws.
I can't just write code to hack the url up though (i.e. functionUrl.url.replace("https://", "")) because when my code executes, the value of the url property is a token like ${Token[TOKEN.350]}.
The function url is working properly: if I hard-code the HttpOrigin to the function url's value (i.e. like xxx.lambda-url.ap-southeast-2.on.aws) - it works fine.
How do I use CDK code to set up the reference from CloudFront to the function url?
I'm using aws-cdk version 2.21.1.
There is an open issue to add support: https://github.com/aws/aws-cdk/issues/20090
Use CloudFormation Intrinsic Functions to parse the url string:
cdk.Fn.select(2, cdk.Fn.split('/', functionUrl.url));
// -> 7w3ryzihloepxxxxxxxapzpagi0ojzwo.lambda-url.us-east-1.on.aws
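The index 2 works because Fn.split splits "https://xxx.lambda-url.us-east-1.on.aws/" on "/" into ["https:", "", "xxx.lambda-url.us-east-1.on.aws", ""], so element 2 is the bare domain CloudFront expects. Slotted into the behaviour from the question, it might look like this (apiDomain is just an illustrative name):

// Resolved at deploy time, so it works even while functionUrl.url
// is still an unresolved token during synthesis:
const apiDomain = cdk.Fn.select(2, cdk.Fn.split('/', functionUrl.url));

// ...then inside additionalBehaviors:
origin: new HttpOrigin(apiDomain, {
  protocolPolicy: OriginProtocolPolicy.HTTPS_ONLY,
  originSslProtocols: [OriginSslPolicy.TLS_V1_2],
}),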

FILE_REFERENCE_EXPIRED at upload.getFile with inputPhotoFileLocation

Can't get content from the upload.getFile method using inputPhotoFileLocation; I get the exception FILE_REFERENCE_EXPIRED. I've read lots of forums but can't find an answer.
I'm using an MTProto client in JS:
this.call('upload.getFile', {
  location: {
    _: 'inputPhotoFileLocation',
    id: message.media.photo.id,
    access_hash: message.media.photo.access_hash,
    file_reference: message.media.photo.file_reference, // tried
    // Buffer.from(message.media.photo.file_reference.toString('hex'), 'hex'),
    // [...message.media.photo.file_reference] and others
    thumb_size: JSON.stringify(message.media.photo.sizes.find(size => size._ == 'photoSizeProgressive'))
  },
  offset: 0,
  limit: 1024 * 1024
})
https://core.telegram.org/constructor/inputPhotoFileLocation
You must pass the type from message.media.photo.sizes in the thumb_size field.
That is, instead of JSON.stringify(message.media.photo.sizes.find(size => size._ == 'photoSizeProgressive')) you need to specify message.media.photo.sizes.find(size => size._ == 'photoSizeProgressive').type
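Applied to the call from the question, only the thumb_size line changes (a sketch):

this.call('upload.getFile', {
  location: {
    _: 'inputPhotoFileLocation',
    id: message.media.photo.id,
    access_hash: message.media.photo.access_hash,
    file_reference: message.media.photo.file_reference,
    // thumb_size expects the size's short type identifier,
    // not a JSON-serialized photoSize object:
    thumb_size: message.media.photo.sizes.find(size => size._ == 'photoSizeProgressive').type
  },
  offset: 0,
  limit: 1024 * 1024
})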

How to set responsive images with CKEditor 5?

Using the Simple Upload Adapter, according to the CKEditor 5 documentation, upon a successful image upload the server should return either:
This for a single image:
{
  default: 'http://example.com/images/image-default-size.png'
}
Or this for multiple sizes (i.e. for the srcset attribute):
{
  default: 'http://example.com/images/image-default-size.png',
  '160': 'http://example.com/images/image-size-160.image.png',
  '500': 'http://example.com/images/image-size-500.image.png',
  '1000': 'http://example.com/images/image-size-1000.image.png',
  '1052': 'http://example.com/images/image-default-size.png'
}
For starters, the documentation is incorrect as per this SO post, so I'm not surprised this doesn't work. But does anyone know what response format CKEditor 5 expects in order to correctly insert/build the srcset attribute? It does take the default key but seems to ignore the others!
Inside the _initListeners function of the upload adapter you will find that the Promise only resolves with the following:
resolve( {
  default: response.url
} );
The solution is to change the Promise to resolve with the following:
resolve( response.urls );
Note that in this example the response object may have either the key url or urls.
I ended up using the following as I ignore null keys in my server responses.
if ( response.hasOwnProperty( 'url' ) ) {
  resolve( {
    default: response.url
  } );
} else if ( response.hasOwnProperty( 'urls' ) ) {
  resolve( response.urls );
}
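With that change, the server can respond with either of these shapes (illustrative URLs; the url/urls keys are this adapter's own convention, not something CKEditor mandates):

// single image:
{ "url": "http://example.com/images/image-default-size.png" }

// multiple sizes, passed straight through as the srcset map:
{
  "urls": {
    "default": "http://example.com/images/image-default-size.png",
    "500": "http://example.com/images/image-size-500.png",
    "1000": "http://example.com/images/image-size-1000.png"
  }
}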
As a sidenote, if you've read through the other SO post I referred to, I would also recommend removing this (see the commented section):
if ( !response /** || !response.uploaded */ ) {
  return reject( response && response.error && response.error.message ? response.error.message : genericError );
}
I'm not a fan of using arbitrary flags in the response. If the upload failed, I would rather see an HTTP status code indicating it; I can't see any reason to return 200 with { "uploaded": false }.

In eo.editor.js > storageManager, what does the "type: local " property mean?

In eo.editor.js > storageManager, what does the type: 'local' property mean?
storageManager: {
  type: 'local',
},
Switching to remote storage is very simple: it's just a matter of specifying your endpoints for storing and loading, which might even be the same endpoint (if you rely on HTTP methods).
const editor = grapesjs.init({
  ...
  storageManager: {
    type: 'remote',
    stepsBeforeSave: 3,
    urlStore: 'http://endpoint/store-template/some-id-123',
    urlLoad: 'http://endpoint/load-template/some-id-123',
    // For custom parameters/headers on requests
    params: { _some_token: '....' },
    headers: { Authorization: 'Basic ...' },
  }
});
As you can see, we've left some default options unchanged, increased the number of changes required to trigger an autosave, and passed in the remote endpoints.
You can set type to either local or remote.
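For comparison, a minimal local setup might look like this (a sketch; with type: 'local' GrapesJS persists the project in the browser's localStorage instead of hitting HTTP endpoints):

const editor = grapesjs.init({
  // ...
  storageManager: {
    type: 'local',      // save/load through localStorage; no endpoints needed
    stepsBeforeSave: 1, // autosave after every change
  }
});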

Goliath + Redis: Testing sequence of requests

I wrote an endpoint that has two operations: PUT and GET. GET retrieves what was PUT, so a good way to ensure both work would be to test them in sequence.
My cURL testing tells me the service works. However, I can't get my test that does the same thing to pass!
it "accepts authenticated PUT data" do
attributes1 = {
user: {
lat: '12.34',
lng: '56.78'
}
}
with_api(Location, { log_stdout: true, verbose: true }) do
put_request({ query: { auth_token: 'abc' }, body: attributes1.to_json }) do |client|
client.response_header.status.must_equal 201
get_request({ query: { auth_token: 'abc' } }) do |client|
client.response_header.status.must_equal 200
client.response.must_match_json_expression([attributes1[:user]])
end
end
end
end
Note that PUT accepts JSON in the body, and GET returns JSON. The service uses Redis as the datastore, and I use mock_redis to mock it.
test_helper.rb:
$redis = MockRedis.new

class Goliath::Server
  def load_config(file = nil)
    config['redis'] = $redis
  end
end
At the top of the spec:
before do
  $redis.flushdb
end
The GET should retrieve what was just PUT, but instead it retrieves JSON "null".
Am I doing anything obviously wrong? Perhaps with how I am passing along body data to the request?
It would definitely help to get some better logging when my tests are running. I tried the { log_stdout: true, verbose: true } options, but this still only seems to output basic INFO logging from the Goliath server, which doesn't tell me anything about data and params.
Any help would be appreciated!
