I'm trying to use AWS MediaConvert on a .mov file that has an alpha channel, but I couldn't find a way to preserve the alpha channel. It seems to always be discarded, so I'm losing the transparency.
Even the input QuickTime settings have only two options: Discard, and Remap to Luma, which turns everything BUT the alpha channel white (?!).
Any help would be appreciated to preserve the alpha channel.
Thank you
{
  "Queue": "arn:aws:mediaconvert:us-east-1:xxx:queues/Default",
  "UserMetadata": {},
  "Role": "arn:aws:iam::xxx:role/service-role/MediaConvert_Default_Role",
  "Settings": {
    "TimecodeConfig": {
      "Source": "ZEROBASED"
    },
    "OutputGroups": [
      {
        "CustomName": "test vp9",
        "Name": "DASH ISO",
        "Outputs": [
          {
            "ContainerSettings": {
              "Container": "MPD"
            },
            "VideoDescription": {
              "CodecSettings": {
                "Codec": "VP9",
                "Vp9Settings": {
                  "RateControlMode": "VBR",
                  "Bitrate": 1000000
                }
              }
            },
            "NameModifier": "_output1"
          }
        ],
        "OutputGroupSettings": {
          "Type": "DASH_ISO_GROUP_SETTINGS",
          "DashIsoGroupSettings": {
            "SegmentLength": 30,
            "Destination": "s3://xxx/",
            "FragmentLength": 2
          }
        }
      }
    ],
    "Inputs": [
      {
        "VideoSelector": {
          "AlphaBehavior": "DISCARD"
        },
        "TimecodeSource": "ZEROBASED",
        "FileInput": "https://xxx/yyy.mov"
      }
    ]
  },
  "AccelerationSettings": {
    "Mode": "DISABLED"
  },
  "StatusUpdateInterval": "SECONDS_60",
  "Priority": 0
}
Thanks for your message. Regarding inputs with an alpha channel: the two supported formats are PNG stills with alpha, or a QuickTime .mov using the QTRLE codec with alpha (RGBA). Other QuickTime codecs will fail, and a .mov using the Animation codec whose content has no alpha (RGB only) will fail as well.
Using either of the formats mentioned above and positioning your overlay correctly on the output canvas should get you a properly keyed overlay. Preserving the alpha on output will require a target output format that supports alpha. I hope that helps.
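If it helps, one way to wire up the overlay route is MediaConvert's motion image inserter in the job settings. A minimal sketch, assuming a QTRLE .mov with alpha as described above; the S3 path, position, and timing here are placeholders, not values from your job:
"Settings": {
  "MotionImageInserter": {
    "Input": "s3://xxx/overlay-with-alpha.mov", // placeholder path to the QTRLE overlay
    "InsertionMode": "MOV",
    "Offset": { "ImageX": 0, "ImageY": 0 },     // position on the output canvas
    "StartTime": "00:00:00:00",
    "Playback": "ONCE"
  }
}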
I have a Slack bot that uses a drop-down menu, and it has a color bar on the side.
See the screenshot of what I have here, in the green circle.
I want the bar to extend the whole message, like in this image.
Note: this image is edited to show an example of the red bar (the actual Slack bot message isn't important).
My code has something like:
let slackPost = {
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": myText
      }
    } // ... some other blocks
  ],
  "attachments": [
    {
      "text": menuTitle,
      "color": menuBarColor,
      "attachment_type": "default",
      "actions": [
        {
          "name": menuName,
          "text": menuPlaceHolder,
          "type": "select",
          "options": menuOptions
        }
      ]
    }
  ]
}
The new Slack blocks layout doesn't allow the old color property on attachments. You can find the official documentation here.
There is one exception, and that's the color parameter, which currently does not have a block alternative. If you are strongly attached (🎺) to the color bar, use the blocks parameter within an attachment.
You can nest blocks inside the attachment property, like this:
let slackPost = {
  "attachments": [{
    "color": message.color,
    "blocks": [
      {
        "type": "section",
        "text": {
          "type": "mrkdwn",
          "text": myText
        }
      } // ... some other blocks
    ]
  }]
}
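For completeness, the final payload posted (e.g. to chat.postMessage) would look roughly like this; the channel, fallback text, and color below are placeholders to adjust for your bot:
let slackPost = {
  "channel": "#my-channel",                      // placeholder channel
  "text": "Fallback text for notifications",    // placeholder fallback text
  "attachments": [{
    "color": "#36a64f",                          // placeholder bar color
    "blocks": [
      {
        "type": "section",
        "text": {
          "type": "mrkdwn",
          "text": myText
        }
      }
    ]
  }]
}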
I am trying to run annotation on a video using the Google Cloud Video Intelligence API. For annotation requests with just one feature (i.e., one of "LABEL_DETECTION", "SHOT_CHANGE_DETECTION", or "EXPLICIT_CONTENT_DETECTION"), things work fine. However, when I request an annotation with two or more features at the same time, the response does not always return all the requested feature fields. For example, here is a request I ran recently using the API explorer:
{
  "features": [
    "EXPLICIT_CONTENT_DETECTION",
    "LABEL_DETECTION",
    "SHOT_CHANGE_DETECTION"
  ],
  "inputUri": "gs://gccl_dd_01/Video1"
}
The operation ID I got back is this: "us-east1.11264560501473964275". When I run a GET with this ID, I get the following response:
200
{
  "name": "us-east1.11264560501473964275",
  "metadata": {
    "#type": "type.googleapis.com/google.cloud.videointelligence.v1.AnnotateVideoProgress",
    "annotationProgress": [
      {
        "inputUri": "/gccl_dd_01/Video1",
        "progressPercent": 100,
        "startTime": "2018-08-06T17:13:58.129978Z",
        "updateTime": "2018-08-06T17:18:01.274877Z"
      },
      {
        "inputUri": "/gccl_dd_01/Video1",
        "progressPercent": 100,
        "startTime": "2018-08-06T17:13:58.129978Z",
        "updateTime": "2018-08-06T17:14:39.074505Z"
      },
      {
        "inputUri": "/gccl_dd_01/Video1",
        "progressPercent": 100,
        "startTime": "2018-08-06T17:13:58.129978Z",
        "updateTime": "2018-08-06T17:16:23.230536Z"
      }
    ]
  },
  "done": true,
  "response": {
    "#type": "type.googleapis.com/google.cloud.videointelligence.v1.AnnotateVideoResponse",
    "annotationResults": [
      {
        "inputUri": "/gccl_dd_01/Video1",
        "segmentLabelAnnotations": [
          ...
        ],
        "shotLabelAnnotations": [
          ...
        ],
        "shotAnnotations": [
          ...
        ]
      }
    ]
  }
}
The done parameter of the response is set to true, but there is no field containing the Explicit Content annotations.
To my novice eyes this issue seems to occur at random: the API returns a response with all the requested fields on some occasions and is missing one on others. Is there anything I am missing here, or something on my end that is causing this?
I did some tests using just LABEL_DETECTION, just EXPLICIT_CONTENT_DETECTION, and all three features together.
As I am not using videos with explicit content, I don't see any specific field when adding just EXPLICIT_CONTENT_DETECTION:
{
  "name": "europe-west1.462458490043912485",
  "metadata": {
    "#type": "type.googleapis.com/google.cloud.videointelligence.v1.AnnotateVideoProgress",
    "annotationProgress": [
      {
        "inputUri": "/cloud-ml-sandbox/video/chicago.mp4",
        "startTime": "2018-08-07T14:18:40.086713Z",
        "updateTime": "2018-08-07T14:18:40.230351Z"
      }
    ]
  }
}
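For reference, when explicit content results are returned, they should appear inside each annotationResults entry as an explicitAnnotation field, shaped roughly like this (the values below are illustrative, not from a real run):
"annotationResults": [
  {
    "inputUri": "/cloud-ml-sandbox/video/chicago.mp4",
    "explicitAnnotation": {
      "frames": [
        {
          "timeOffset": "0.5s",
          "pornographyLikelihood": "VERY_UNLIKELY"
        }
      ]
    }
  }
]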
Can you share a specific video sample, the request.json used and two different outputs, please?
I am using the Bot Framework to create a chat bot, and I want to create a message card (Hero or Thumbnail).
If you look at the Skype bot API docs, there is a way to encode image bytes directly as the content URL: https://docs.botframework.com/en-us/skype/chat/
"type": "message/image",
"attachments": [
{
"contentUrl": "<base64 encoded image>",
"thumbnailUrl": "<base64 encoded thumbnail>", // optional
"filename": "bear.jpg" // optional
}
]
This works fine for showing an image on its own, but I want the image to be part of a card.
The card is:
{
  "type": "message/card.carousel",
  "attachments": [
    {
      "contentType": "application/vnd.microsoft.card.hero",
      "content": {
        "images": [
          {
            "image": "https://foo.com/path/image.jpg"
          }
        ]
      }
    }
  ]
}
I have tried setting the image URL property to the encoded bytes, but the client can't show it. What's the best way to achieve this?
You've got the basic idea. Use this instead:
"attachments": [
{
"contentType": "application/vnd.microsoft.card.hero",
"content": {
"title": "Title",
"subtitle": "SubTitle",
"text": "Text",
"images": [
{
"url": "image/jpeg;base64,{YOUR IMAGE}",
"alt": "Alt Image Description"
}
]
}
}
],
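Putting it together with the carousel wrapper from your question, the complete message would look something like this (the title and alt text are just example values, and the base64 payload stays a placeholder):
{
  "type": "message/card.carousel",
  "attachments": [
    {
      "contentType": "application/vnd.microsoft.card.hero",
      "content": {
        "title": "Title",
        "images": [
          {
            "url": "data:image/jpeg;base64,{YOUR IMAGE}",
            "alt": "Alt Image Description"
          }
        ]
      }
    }
  ]
}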
I am building an iOS app to cast video content from a DLNA/UPnP media server to Chromecast. The problem appears when I add media track data for the captions. The following works:
### Media Manager - LOAD: {
  "type": "load",
  "I": false,
  "defaultPrevented": false,
  "kb": true,
  "data": {
    "currentTime": 0,
    "requestId": 3,
    "autoplay": true,
    "media": {
      "metadata": {
        "title": "movie.mp4",
        "subtitle": "Unknown",
        "images": [{
          "url": "http://192.168.1.15:8895/resource/625/COVER_IMAGE",
          "width": 200,
          "height": 100
        }],
        "metadataType": 0
      },
      "textTrackStyle": {
        "windowRoundedCornerRadius": 8,
        "windowType": "ROUNDED_CORNERS",
        "foregroundColor": "#FFFFFFFF",
        "fontFamily": "Helvetica",
        "fontGenericFamily": "SANS_SERIF",
        "fontStyle": "BOLD",
        "fontScale": 1,
        "windowColor": "#000000FF",
        "backgroundColor": "#000000FF"
      },
      "contentId": "http://192.168.1.15:8895/resource/338/MEDIA_ITEM/AVC_MP4_MP_SD_AAC_MULT5-0/ORIGINAL",
      "contentType": "video/mp4",
      "streamType": "NONE",
      "duration": 0
    }
  },
  "senderId": "33:3EA56D16-D18E-4D13-87A0-717DC188F8AF"
}
but when I add captions, either my own or for instance ones from the CastVideos sample app (so as to make sure they are CORS-enabled), it won't work with:
### Media Manager - LOAD: {
  "type": "load",
  "I": false,
  "defaultPrevented": false,
  "kb": true,
  "data": {
    "currentTime": 0,
    "requestId": 4,
    "autoplay": true,
    "media": {
      "tracks": [{
        "trackId": 1,
        "trackContentId": "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/GoogleIO-2014-CastingToTheFuture2-en.vtt",
        "language": "en-US",
        "subtype": "CAPTIONS",
        "type": "TEXT",
        "trackContentType": "text/vtt",
        "name": "English Subtitle"
      }],
      "contentId": "http://192.168.1.15:8895/resource/338/MEDIA_ITEM/AVC_MP4_MP_SD_AAC_MULT5-0/ORIGINAL",
      "streamType": "NONE",
      "contentType": "video/mp4",
      "duration": 0,
      "metadata": {
        "title": "movie.mp4",
        "subtitle": "Unknown",
        "images": [{
          "url": "http://192.168.1.15:8895/resource/625/COVER_IMAGE",
          "width": 200,
          "height": 100
        }],
        "metadataType": 0
      },
      "textTrackStyle": {
        "windowRoundedCornerRadius": 8,
        "windowType": "ROUNDED_CORNERS",
        "foregroundColor": "#FFFFFFFF",
        "fontFamily": "Helvetica",
        "fontGenericFamily": "SANS_SERIF",
        "fontStyle": "BOLD",
        "fontScale": 1,
        "windowColor": "#000000FF",
        "backgroundColor": "#000000FF"
      }
    }
  },
  "senderId": "11:9D7D43C2-CD85-4143-82AA-2D0056AA62FC"
}
I get:
[cast.receiver.MediaManager] Load metadata error cast_receiver.js:18
However, the sample app's content works with captions in my custom receiver without any problems, and I don't see much of a difference (except for the HTTP video/image URLs, which only produce warnings):
### Media Manager - LOAD: {
  "type": "load",
  "I": false,
  "defaultPrevented": false,
  "kb": true,
  "data": {
    "currentTime": 0,
    "requestId": 3,
    "autoplay": true,
    "media": {
      "tracks": [{
        "trackId": 1,
        "trackContentId": "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/DesigningForGoogleCast-en.vtt",
        "language": "en-US",
        "subtype": "CAPTIONS",
        "type": "TEXT",
        "trackContentType": "text/vtt",
        "name": "English Subtitle"
      }],
      "contentId": "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/DesigningForGoogleCast.mp4",
      "streamType": "NONE",
      "contentType": "video/mp4",
      "duration": 0,
      "metadata": {
        "title": "Designing For Google Cast",
        "subtitle": "Google IO - 2014",
        "images": [{
          "url": "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/images_480x270/DesigningForGoogleCast2-480x270.jpg",
          "width": 200,
          "height": 100
        }],
        "metadataType": 0
      },
      "textTrackStyle": {
        "windowRoundedCornerRadius": 8,
        "windowType": "ROUNDED_CORNERS",
        "foregroundColor": "#FFFFFFFF",
        "fontFamily": "Helvetica",
        "fontGenericFamily": "SANS_SERIF",
        "fontStyle": "BOLD",
        "fontScale": 1,
        "windowColor": "#000000FF",
        "backgroundColor": "#000000FF"
      }
    }
  },
  "senderId": "46:F162E34A-1A6D-4C0C-A0A3-DB584C92CDF5"
}
Any ideas?
Thanks in advance.
When using captions with Chromecast, CORS is required not only for the captions but for the video stream as well. In my case the video HTTP server (e.g. Serviio) doesn't support it.
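In practice that means the server hosting both the .vtt and the video must answer with Access-Control-Allow-Origin headers. As a sketch: if you hosted the media on Google Cloud Storage instead, applying a cors.json like the one below with gsutil cors set cors.json gs://your-bucket (the bucket name is a placeholder) would enable it; the wildcard origin is only for testing and should be narrowed in production.
[
  {
    "origin": ["*"],
    "method": ["GET", "HEAD", "OPTIONS"],
    "responseHeader": ["Content-Type", "Range"],
    "maxAgeSeconds": 3600
  }
]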
I'm trying to generate SVG maps from the GEOFLA shapefiles.
Using the 'bbox' bounds mode with manually set bbox values works well:
{
  "layers": [{
    "id": "depts",
    "src": "data/DEPARTEMENTS/DEPARTEMENT.shp",
    "filter": {"CODE_REG": "24"},
    "simplify": {
      "method": "distance",
      "tolerance": 8
    },
    "attributes": "all"
  }],
  "bounds": {
    "mode": "bbox",
    "data": [-4.5, 42, 8, 48]
  },
  "export": {
    "width": 600,
    "ratio": 0.8
  }
}
But when I set the bounds mode to 'polygons', I get an empty SVG map:
{
  "layers": [{
    "id": "depts",
    "src": "data/DEPARTEMENTS/DEPARTEMENT.shp",
    "filter": {"CODE_REG": "24"},
    "simplify": {
      "method": "distance",
      "tolerance": 8
    },
    "attributes": "all"
  }],
  "bounds": {
    "mode": "polygons",
    "data": {
      "layer": "depts"
    },
    "padding": 0.06
  },
  "export": {
    "width": 600,
    "ratio": 0.8
  }
}
I had a look at the Kartograph source and noticed that the get_features method in map.py returns a Polygon whose coordinates don't intersect with the feature geometries previously extracted from the shapefile.
Each feature is then thrown away in the get_features method of maplayer.py when checking whether the feature geometry intersects the layer.map.view_poly property.
I had a similar problem with the GEOFLA file projection.
The solution I found is basically to change the shapefile projection using QGIS. My idea was to use the projection of the example shapefile given in the installation guide, which worked for me.
1. Get the example shapefile from the Kartograph installation page.
2. Load this vector layer in QGIS.
3. Add your GEOFLA layer in QGIS.
4. Right-click the GEOFLA layer and choose "Save as...".
5. In the save window, give your layer a new name (e.g. DEPARTEMENT_WGS84.shp).
6. Click the CRS button and select the test layer's projection (WGS 84 / EPSG:4326).
7. Click OK.
Check that the new shapefile has the correct projection:
cat DEPARTEMENT_WGS84.prj
GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",SPHEROID["WGS_1984",6378137,298.257223563]],PRIMEM["Greenwich",0],UNIT["Degree",0.017453292519943295]]
Now your script should work fine using the new shapefile.
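With the reprojected file in place, the 'polygons' config from the question should work unchanged apart from the src path (shown here assuming the new file sits next to the original):
{
  "layers": [{
    "id": "depts",
    "src": "data/DEPARTEMENTS/DEPARTEMENT_WGS84.shp",
    "filter": {"CODE_REG": "24"},
    "simplify": {
      "method": "distance",
      "tolerance": 8
    },
    "attributes": "all"
  }],
  "bounds": {
    "mode": "polygons",
    "data": {"layer": "depts"},
    "padding": 0.06
  },
  "export": {
    "width": 600,
    "ratio": 0.8
  }
}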