Elemental MediaConvert createJob cache control settings - aws-media-convert

I am converting MP4 videos in S3 into resized versions and uploading them back to S3 using AWS Elemental MediaConvert. Is there a setting in the MediaConvert job that I can use to set the Cache-Control max-age header so that it gets applied to the resized versions of the videos?
An excerpt of the job settings that I use is shared below:
{
  OutputGroups: [
    {
      Outputs: [
        {
          ContainerSettings: {
            Container: "MP4",
            Mp4Settings: {
              CslgAtom: "INCLUDE",
              FreeSpaceBox: "EXCLUDE",
              MoovPlacement: "PROGRESSIVE_DOWNLOAD",
            },
          },
          VideoDescription: {
            Width: 720,
            ScalingBehavior: "DEFAULT",
            Height: 1280,
            TimecodeInsertion: "DISABLED",
            AntiAlias: "ENABLED",
            Sharpness: 50,
            CodecSettings: {
              Codec: "H_264",
              H264Settings: {
                InterlaceMode: "PROGRESSIVE",
                NumberReferenceFrames: 3,
                Syntax: "DEFAULT",
                Softness: 0,
                GopClosedCadence: 1,
                GopSize: 90,
                Slices: 1,
                GopBReference: "DISABLED",
                MaxBitrate: 3000000,
                SlowPal: "DISABLED",
                SpatialAdaptiveQuantization: "ENABLED",
                TemporalAdaptiveQuantization: "ENABLED",
                FlickerAdaptiveQuantization: "DISABLED",
                EntropyEncoding: "CABAC",
                FramerateControl: "INITIALIZE_FROM_SOURCE",
                RateControlMode: "QVBR",
                QvbrSettings: {
                  QvbrQualityLevel: 7,
                },
                CodecProfile: "MAIN",
                Telecine: "NONE",
                MinIInterval: 0,
                AdaptiveQuantization: "HIGH",
                CodecLevel: "AUTO",
                FieldEncoding: "PAFF",
                SceneChangeDetect: "ENABLED",
                QualityTuningLevel: "SINGLE_PASS",
                FramerateConversionAlgorithm: "DUPLICATE_DROP",
                UnregisteredSeiTimecode: "DISABLED",
                GopSizeUnits: "FRAMES",
                ParControl: "INITIALIZE_FROM_SOURCE",
                NumberBFramesBetweenReferenceFrames: 2,
                RepeatPps: "DISABLED",
              },
            },
            AfdSignaling: "NONE",
            DropFrameTimecode: "ENABLED",
            RespondToAfd: "NONE",
            ColorMetadata: "INSERT",
          },
          AudioDescriptions: [
            {
              AudioTypeControl: "FOLLOW_INPUT",
              CodecSettings: {
                Codec: "AAC",
                AacSettings: {
                  AudioDescriptionBroadcasterMix: "NORMAL",
                  Bitrate: 96000,
                  RateControlMode: "CBR",
                  CodecProfile: "LC",
                  CodingMode: "CODING_MODE_2_0",
                  RawFormat: "NONE",
                  SampleRate: 48000,
                  Specification: "MPEG4",
                },
              },
              LanguageCodeControl: "FOLLOW_INPUT",
            },
          ],
          Extension: ".mp4",
        },
      ],
      OutputGroupSettings: {
        Type: "FILE_GROUP_SETTINGS",
        FileGroupSettings: {
          Destination: "",
          DestinationSettings: {
            S3Settings: {
              AccessControl: {
                CannedAcl: "PUBLIC_READ",
              },
            },
          },
        },
      },
    },
  ],
}

MediaConvert does not presently have a setting that allows adding user metadata (e.g. HTTP headers) to its outputs.
This needs to be implemented as a post-processing step in your workflow.
Since MediaConvert uses Amazon S3 as its output destination, you would typically set object metadata (such as the Cache-Control header) in Amazon S3 at the time you upload the object.
Once the object is uploaded, you cannot modify its metadata in place. The only way to modify object metadata is to make a copy of the object and set the metadata on the copy.
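For example, a minimal sketch with boto3 (the bucket and key names are placeholders) that re-copies an existing rendition onto itself with a new Cache-Control value:
import boto3

s3 = boto3.client("s3")
bucket, key = "my-output-bucket", "videos/resized.mp4"  # placeholder names

# Copying the object onto itself with MetadataDirective=REPLACE is how you
# "modify" metadata such as Cache-Control after the object already exists.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    MetadataDirective="REPLACE",
    ContentType="video/mp4",        # restate the content type when replacing metadata
    CacheControl="max-age=86400",   # the header you want browsers/CDNs to honour
    ACL="public-read",              # re-apply if you rely on the PUBLIC_READ canned ACL
)
Note that copy_object handles objects up to 5 GB; for larger renditions use the managed multipart copy (boto3's s3.copy).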
There are a few approaches to do this:
A. You can add Cache-Control headers to your objects using the Amazon S3 console.
The console handles the copying of the object with the new metadata behind the scenes.
Adding headers to your objects using the Amazon S3 console : https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#ExpirationAddingHeadersInS3
Editing object metadata in the Amazon S3 console : https://docs.aws.amazon.com/AmazonS3/latest/userguide/add-object-metadata.html
B. You can automate adding the metadata by:
Configuring an S3 event notification on your destination S3 bucket that monitors for 'new object created' events and triggers a Lambda function.
The Lambda function then makes a copy of the newly created object (the mp4 file) and sets the metadata (a sketch follows the links below).
Amazon S3 Event Notifications : https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
Tutorial: Using an Amazon S3 trigger to invoke a Lambda function : https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
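A minimal sketch of such a Lambda handler, assuming the event notification is scoped to the MediaConvert output prefix (the max-age value is an example):
import urllib.parse

import boto3

s3 = boto3.client("s3")
CACHE_CONTROL = "max-age=86400"  # example value; tune to your caching policy

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # The in-place copy below fires another "object created" event, so skip
        # objects that already carry the header to avoid an infinite loop.
        head = s3.head_object(Bucket=bucket, Key=key)
        if head.get("CacheControl") == CACHE_CONTROL:
            continue

        s3.copy_object(
            Bucket=bucket,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
            MetadataDirective="REPLACE",
            ContentType=head.get("ContentType", "video/mp4"),
            CacheControl=CACHE_CONTROL,
            ACL="public-read",  # ACLs are not carried over by the copy
        )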
C. Since you are looking to set Cache-Control headers on your objects, I assume that you also intend to make your transcoded videos available via a browser or shared cache (e.g. a CDN).
Should you be using a CDN like CloudFront, you can opt to configure CloudFront to add one or more HTTP headers to the responses that it sends to viewers. Making these changes doesn't require writing code or changing the origin.
The HTTP headers that you can add include the Cache-Control header.
Adding HTTP headers to CloudFront responses : https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/adding-response-headers.html
To do so:
Go to the AWS console and navigate to the CloudFront distribution.
Go to the Behaviors tab.
Select the relevant (or default) behavior and choose Edit.
There you can see the option Response headers policy – optional.
You can use an existing response headers policy or create a new one.
Go to Policies – Response headers and click "Create response headers policy" under Custom policies, or edit an existing policy if you already have one.
In my case I already had a policy, so I edited it.
Under Custom headers – optional, add the Cache-Control header with the max-age value you want; you can choose to have it override the value sent by the origin.
Finally, in the relevant (or default) behavior, select the policy you just created or edited under Response headers policy.
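If you prefer to script this rather than use the console, a sketch with boto3 (the policy name and max-age are placeholders) that creates such a response headers policy; you then attach the returned policy Id to the cache behavior:
import boto3

cloudfront = boto3.client("cloudfront")

resp = cloudfront.create_response_headers_policy(
    ResponseHeadersPolicyConfig={
        "Name": "video-cache-control",  # placeholder policy name
        "Comment": "Add Cache-Control to viewer responses",
        "CustomHeadersConfig": {
            "Quantity": 1,
            "Items": [
                {
                    "Header": "Cache-Control",
                    "Value": "max-age=86400",
                    "Override": True,  # replace whatever the origin sends
                }
            ],
        },
    }
)
print(resp["ResponseHeadersPolicy"]["Id"])  # attach this Id to the cache behavior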

Related

Envoy external processing filter: issues with Content-Length Header when processing request body

I'm having a bit of a struggle with using the external processing filter for Envoy (described here). I'm currently using a setup similar to this example, where I want to do some processing on both the request and response bodies being sent to an upstream service, using a gRPC server in Go.
The issue I'm facing is that when I try to make changes to the request body, i.e. use a BodyMutation to send a new body (basically the same JSON but with one field altered, as an example), the Content-Length header is missing when the request is forwarded to the upstream service, which causes an error. Yet if I don't alter the body (so no BodyMutation), the header is there as it should be.
Basically, if I try to set the processing response for the request body like so:
bytesToSend := b.RequestBody.Body
resp = &pb.ProcessingResponse{
  Response: &pb.ProcessingResponse_RequestBody{
    RequestBody: &pb.BodyResponse{
      Response: &pb.CommonResponse{
        HeaderMutation: &pb.HeaderMutation{
          SetHeaders: []*core.HeaderValueOption{
            {
              Header: &core.HeaderValue{
                Key:   "Content-Length",
                Value: strconv.Itoa(len(bytesToSend)),
              },
            },
          },
        },
        BodyMutation: &pb.BodyMutation{
          Mutation: &pb.BodyMutation_Body{
            Body: bytesToSend,
          },
        },
      },
    },
  },
  ModeOverride: &v3.ProcessingMode{
    ResponseHeaderMode: v3.ProcessingMode_SEND,
    ResponseBodyMode:   v3.ProcessingMode_NONE,
  },
}
The Content-Length is removed completely. But if I don't set the BodyMutation, like so:
bytesToSend := b.RequestBody.Body
resp = &pb.ProcessingResponse{
  Response: &pb.ProcessingResponse_RequestBody{
    RequestBody: &pb.BodyResponse{
      Response: &pb.CommonResponse{
        HeaderMutation: &pb.HeaderMutation{
          SetHeaders: []*core.HeaderValueOption{
            {
              Header: &core.HeaderValue{
                Key:   "Content-Length",
                Value: strconv.Itoa(len(bytesToSend)),
              },
            },
          },
        },
      },
    },
  },
  ModeOverride: &v3.ProcessingMode{
    ResponseHeaderMode: v3.ProcessingMode_SEND,
    ResponseBodyMode:   v3.ProcessingMode_NONE,
  },
}
Then the Content-Length is present, and the HeaderMutation does set it to whatever I set it to (in the example it's just the length of the body again).
I'm not entirely clear on the behaviour here. Is there something in particular I should be looking out for, either in this code or Envoy's configuration?
Any pointers would be greatly appreciated.
EDIT: It seems the behaviour here is that when the body is mutated, Envoy switches to chunked transfer encoding and therefore removes the Content-Length. That makes sense, but it causes issues if the upstream service requires a Content-Length. The header mutation is ignored, so computing the Content-Length manually and sending it doesn't seem to work.

Generating number of thumbnails depending on video size using AWS MediaConvert

After reading this article I get the sense that an AWS MediaConvert job template cannot be reused to generate thumbnails for videos of arbitrary size. The article assumes that we know the size/duration of the uploaded video upfront, and hence the number of thumbnails we want.
What I am looking for is to generate a variable number of thumbnails based on video size (e.g. a large number of thumbnails for a large video and a small number for a small one). I approached this using a Lambda trigger and an ffmpeg Lambda layer, but the Lambda function times out (15-minute maximum) for videos larger than 150 MB (since it takes time to read the video from the S3 bucket).
What are my options to process a large number of videos, generate a variable number of thumbnails, and merge those thumbnails into a sprite?
I tried a Lambda trigger with ffmpeg/ffprobe to generate the sprite, but that has the timeout issue. Now I have set up a CloudWatch Events rule to trigger a Lambda function on the MediaConvert job status change (COMPLETE)
and merge the thumbnails into a sprite, which is much lighter, but I still need a variable number of thumbnails.
You could switch from using ffprobe to MediaInfo [1]. This can get you the duration and file size of an input file. From there you can use the calculations from the article and use MediaConvert to create your sprite images. After the job completes you can use the COMPLETE CloudWatch event to trigger a post-processing workflow that creates the sprite manifest. By default the COMPLETE details section will include the output path to the last JPEG frame capture from your MediaConvert output [2].
As a pre-processing step you can gather the duration of the file as well as its size. If you only require the size of the file, you can use the S3 SDK to get the content length of the file (which makes a HEAD request to the object) and calculate from there.
import boto3

# HEAD the object to read its size without downloading it
s3 = boto3.client('s3')
response = s3.head_object(Bucket='bucket', Key='keyname')
size = response['ContentLength']  # size in bytes
print(size)
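From there, a sketch of deriving a FrameCaptureSettings block so that longer videos get more thumbnails (the per-thumbnail spacing and cap are assumptions to tune; the duration would come from MediaInfo in the pre-processing step):
import math

def frame_capture_settings(duration_seconds, seconds_per_thumb=10, max_thumbs=100):
    # One thumbnail roughly every `seconds_per_thumb` seconds, capped at `max_thumbs`.
    captures = max(1, min(max_thumbs, math.ceil(duration_seconds / seconds_per_thumb)))
    interval = max(1, round(duration_seconds / captures))
    return {
        "Codec": "FRAME_CAPTURE",
        "FrameCaptureSettings": {
            "FramerateNumerator": 1,           # capture 1 frame ...
            "FramerateDenominator": interval,  # ... every `interval` seconds
            "MaxCaptures": captures,
            "Quality": 80,
        },
    }

# A 30-minute video -> 100 captures, one roughly every 18 seconds.
print(frame_capture_settings(1800))
The returned dict drops into the frame-capture output's VideoDescription.CodecSettings when you submit the job.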
You can also use parameter overrides with job templates. For example, you can create a job that references a template and specify settings that override the template's settings when you call the CreateJob API.
The following example uses presets; however, the same concept applies to job templates.
Using the following HLS System Presets
System-Avc_16x9_720p_29_97fps_3500kbps
System-Avc_16x9_360p_29_97fps_1200kbps
System-Avc_16x9_270p_14_99fps_400kbps
The JSON payload below changes the Audio Selector source name for Output 2 (System-Avc_16x9_360p_29_97fps_1200kbps) and Output 3 (System-Avc_16x9_270p_14_99fps_400kbps); a sketch of submitting it with the CreateJob API follows the payload.
{
  "Queue": "arn:aws:mediaconvert:us-west-2:111122223333:queues/Default",
  "UserMetadata": {},
  "Role": "arn:aws:iam::111122223333:role/EMFRoleSPNames",
  "Settings": {
    "OutputGroups": [
      {
        "Name": "Apple HLS",
        "Outputs": [
          {
            "NameModifier": "_1",
            "Preset": "System-Avc_16x9_720p_29_97fps_3500kbps"
          },
          {
            "NameModifier": "_2",
            "Preset": "System-Avc_16x9_360p_29_97fps_1200kbps",
            "AudioDescriptions": [
              {
                "AudioSourceName": "Audio Selector 2"
              }
            ]
          },
          {
            "NameModifier": "_3",
            "Preset": "System-Avc_16x9_270p_14_99fps_400kbps",
            "AudioDescriptions": [
              {
                "AudioSourceName": "Audio Selector 2"
              }
            ]
          }
        ],
        "OutputGroupSettings": {
          "Type": "HLS_GROUP_SETTINGS",
          "HlsGroupSettings": {
            "ManifestDurationFormat": "INTEGER",
            "SegmentLength": 10,
            "TimedMetadataId3Period": 10,
            "CaptionLanguageSetting": "OMIT",
            "Destination": "s3://myawsbucket/out/master",
            "TimedMetadataId3Frame": "PRIV",
            "CodecSpecification": "RFC_4281",
            "OutputSelection": "MANIFESTS_AND_SEGMENTS",
            "ProgramDateTimePeriod": 600,
            "MinSegmentLength": 0,
            "DirectoryStructure": "SINGLE_DIRECTORY",
            "ProgramDateTime": "EXCLUDE",
            "SegmentControl": "SEGMENTED_FILES",
            "ManifestCompression": "NONE",
            "ClientCache": "ENABLED",
            "StreamInfResolution": "INCLUDE"
          }
        }
      }
    ],
    "Inputs": [
      {
        "AudioSelectors": {
          "Audio Selector 1": {
            "Offset": 0,
            "DefaultSelection": "DEFAULT",
            "ProgramSelection": 1,
            "SelectorType": "TRACK",
            "Tracks": [
              1
            ]
          },
          "Audio Selector 2": {
            "Offset": 0,
            "DefaultSelection": "NOT_DEFAULT",
            "ProgramSelection": 1,
            "SelectorType": "TRACK",
            "Tracks": [
              2
            ]
          }
        },
        "VideoSelector": {
          "ColorSpace": "FOLLOW"
        },
        "FilterEnable": "AUTO",
        "PsiControl": "USE_PSI",
        "FilterStrength": 0,
        "DeblockFilter": "DISABLED",
        "DenoiseFilter": "DISABLED",
        "TimecodeSource": "EMBEDDED",
        "FileInput": "s3://myawsbucket/input/test.mp4"
      }
    ],
    "TimecodeConfig": {
      "Source": "EMBEDDED"
    }
  }
}
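A sketch of submitting that payload with boto3 (MediaConvert requires resolving an account-specific endpoint first; the file name is a placeholder):
import json

import boto3

# Resolve the account-specific endpoint before making other MediaConvert calls.
mc = boto3.client("mediaconvert", region_name="us-west-2")
endpoint = mc.describe_endpoints(MaxResults=1)["Endpoints"][0]["Url"]
mc = boto3.client("mediaconvert", region_name="us-west-2", endpoint_url=endpoint)

payload = json.load(open("job.json"))  # the JSON payload shown above

job = mc.create_job(
    Queue=payload["Queue"],
    Role=payload["Role"],
    Settings=payload["Settings"],  # per-job overrides; the presets fill in the rest
    UserMetadata=payload.get("UserMetadata", {}),
)
print(job["Job"]["Id"])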
== Resources ==
[1] https://aws.amazon.com/blogs/media/running-mediainfo-as-an-aws-lambda-function/
[2] https://docs.aws.amazon.com/mediaconvert/latest/ug/file-group-with-frame-capture-output.html

How to download an image/media using telegram API

I want to start by saying that this question is not about the Telegram Bot API. I am trying to fetch images from a channel using the Telegram core API. The image is in the media property of the message object:
"_": "message",
"pFlags": {
"post": true
},
"flags": 17920,
"post": true,
"id": 11210,
"to_id": {
"_": "peerChannel",
"channel_id": 1171605754
},
"date": 1550556770,
"message": "",
"media": {
"_": "messageMediaPhoto",
"pFlags": {},
"flags": 1,
"photo": {
"_": "photo",
"pFlags": {},
"flags": 0,
"id": "6294134956242348146",
"access_hash": "11226369941418527484",
"date": 1550556770,
I am using the upload.getFile API to fetch the file. An example is:
upload.getFile({
  location: {
    _: 'inputFileLocation',
    id: '6294134956242348146',
    access_hash: '11226369941418527484'
  },
  limit: 1000,
  offset: 0
})
But the problem is that it throws the error RpcError: CODE#400 LIMIT_INVALID. From looking at https://core.telegram.org/api/files it looks like the limit value is invalid. I tried giving the limit as:
1024000 (1Kb)
20480000 (20Kb)
204800000 (200kb)
But it always returns the same error.
For anyone who is also frustrated with the docs: using them, reading them, and trying out different things will ultimately work for you. If possible, someone could take up the task of documenting this wonderful open source software.
Coming to the answer: the location object shouldn't contain id or access_hash like in other APIs; rather, it has its own parameters as defined in the Telegram schema.
A message's media property has a sizes object. This contains 3 or more size options (thumbnail, preview, web size and so on). Choose the one you need and use its volume_id, local_id and secret properties. The working code will look something like this:
upload.getFile({
  location: {
    _: 'inputFileLocation', // this parameter will change for other file types
    volume_id: volumeId,
    local_id: localId,
    secret: secret
  },
  limit: 1024 * 1024,
  offset: 0
}, {
  isFileTransfer: true,
  createClient: true
})
The following points should be noted:
The limit should be in bytes (not bits).
The offset starts at 0. For a big file, use offset and limit to download the file in parts and join them (a sketch follows this list).
Additional parameters such as isFileTransfer and createClient also exist. I haven't fully understood why they are needed; if I have time I'll update this later.
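A sketch of that chunked download in Python, where get_chunk is a hypothetical wrapper around upload.getFile (Telegram imposes divisibility constraints on offset and limit; see https://core.telegram.org/api/files):
def download_in_parts(get_chunk, total_size, limit=1024 * 1024):
    # Download `total_size` bytes in `limit`-sized parts and join them.
    parts = []
    offset = 0
    while offset < total_size:
        part = get_chunk(offset=offset, limit=limit)
        if not part:
            break  # server returned nothing; stop instead of looping forever
        parts.append(part)
        offset += len(part)
    return b"".join(parts)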
Try using a library that's built on top of the original Telegram library. I'm using Airgram, a JS/TS library that is a well-maintained repo.

Google Drive API upload file with custom property, properties missing from response

I am trying to save a Google Drive file with custom properties, but it's not working, and due to the very poor documentation of the Google API I am struggling with this. Here is my code so far:
$g = new FileCtrl();
$fileMeta = new Google_Service_Drive_DriveFile();
$fileMeta->name = $file['name'];
$fileMeta->parents = [$g->googleDriveFolderId];
$fileMeta->appProperties = (object)['lead_id'=>$file['lid'], 'sales_package_id'=>$file['pid'], 'user_id'=>$user->uid];
//$fileMeta->size = $file['size'];
$fileData = array(
    'data' => $fileContent,
    'mimeType' => $file['media_type'],
    'uploadType' => $file['multipart'],
);
//create file in google drive
$res = $g->client->files->create($fileMeta, $fileData);
It does not return an error and the file is saved, but without the custom properties!?
You are probably looking for the properties in the returned file resource.
The reason they aren't there is that Drive only returns a small number of the file properties (name, mime type, id). If you want to see the full file resource you need to include fields=* in the request. So a correct request would look something like
POST https://content.googleapis.com/drive/v3/files?fields=*
I don't know the PHP library, so you'll need to figure that bit out. It will be something like
'fields' => '*',
I just tested this.
Request:
{
  "name": "Linda Test",
  "mimeType": "application/vnd.google-apps.document",
  "appProperties": {
    "TestingId": "1"
  }
}
Response:
{
  "kind": "drive#file",
  "id": "1mhP2EW4Kbl81F5AuJ4zJ2IPoeI56i_Vd5K-dfGJRj6Y",
  "name": "Linda Test",
  "mimeType": "application/vnd.google-apps.document"
}
Do a file.get to check the file metadata.
{
  "kind": "drive#file",
  "id": "1mhP2EW4Kbl81F5AuJ4zJ2IPoeI56i_Vd5K-dfGJRj6Y",
  "name": "Linda Test",
  "mimeType": "application/vnd.google-apps.document",
  "starred": false,
  "trashed": false,
  "explicitlyTrashed": false,
  "parents": [
    "0AJpJkOVaKccEUk9PVA"
  ],
  "appProperties": {
    "TestingId": "1"
  },
  ...
}
I suggest you check your uploaded file using the files.get method after it's returned, to be sure that the properties were added. To my knowledge there is no way to see these added fields via the Google Drive website interface. If after running a files.get you still find that they weren't uploaded, I suggest you log this as a bug with the Google APIs PHP client library team. Since this works with raw REST requests, it could be an issue with the library.
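For comparison, a minimal sketch of the same fields='*' request using the Python client (google-api-python-client; creds is assumed to be an authorized credentials object) rather than the PHP one:
from googleapiclient.discovery import build

drive = build("drive", "v3", credentials=creds)  # creds: your authorized credentials

created = drive.files().create(
    body={
        "name": "Linda Test",
        "mimeType": "application/vnd.google-apps.document",
        "appProperties": {"TestingId": "1"},
    },
    fields="*",  # without this, only kind/id/name/mimeType come back
).execute()
print(created.get("appProperties"))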

How do you set a class-level permission using Parse Server?

I'm migrating an app from Parse.com (hosted) to a self-hosted* Parse Server implementation (https://github.com/ParsePlatform/parse-server).
My app relies heavily on Class-Level Permissions so that, for example, global settings can be read by all users with the public key but not edited or deleted without the master key.
On Parse.com these permissions are configurable in the dashboard, but the open source Parse Server doesn't have a dashboard. Maybe there's some way to do this using cloud code?
*I'm using App Engine's managed VM environment although I'm not sure that's relevant.
Using the REST API:
// POST http://my-parse-server.com/schemas/Announcement
// Set the X-Parse-Application-Id and X-Parse-Master-Key header
// body:
{
  "classLevelPermissions": {
    "find": {
      "requiresAuthentication": true,
      "role:admin": true
    },
    "get": {
      "requiresAuthentication": true,
      "role:admin": true
    },
    "create": { "role:admin": true },
    "update": { "role:admin": true },
    "delete": { "role:admin": true }
  }
}
Ref:
https://docs.parseplatform.org/rest/guide/#requires-authentication-permission-requires-parse-server---230
https://docs.parseplatform.org/rest/guide/#class-level-permissions
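If you are scripting this, a minimal sketch of the same request with Python's requests (the server URL and keys are placeholders; POST creates the class schema, while PUT modifies one that already exists):
import requests

resp = requests.post(  # use requests.put to modify an existing class
    "http://my-parse-server.com/schemas/Announcement",
    headers={
        "X-Parse-Application-Id": "APP_ID",    # placeholder
        "X-Parse-Master-Key": "MASTER_KEY",    # placeholder
        "Content-Type": "application/json",
    },
    json={
        "classLevelPermissions": {
            "find": {"requiresAuthentication": True, "role:admin": True},
            "get": {"requiresAuthentication": True, "role:admin": True},
            "create": {"role:admin": True},
            "update": {"role:admin": True},
            "delete": {"role:admin": True},
        }
    },
)
print(resp.status_code, resp.json())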
I found the _SCHEMA table in the MongoDB datastore and it contains the CLPs I was looking for. Apparently that's where they're stored, and they were migrated along with the rest of my data from Parse.com. I've confirmed that updating those values affects data access permissions.

Resources