nggettext_compile does not extract any strings from my .PO file - angular-gettext

I have a file po/el.po containing translations:
msgid ""
msgstr ""
"Project-Id-Version: \n"
"PO-Revision-Date: 2015-07-01 10:49+0000\n"
"Language: el\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=2; plural=n != 1;\n"
"X-Generator: Weblate 2.2\n"

#: static/src/partials/layout/header.html:17
#, fuzzy
msgid "About"
msgstr "σχετικά με"

#: static/src/partials/layout/header.html:19
#, fuzzy
msgid "Admin"
msgstr "διαχειριστής"
[[SNIP]]
I've taken the Gruntfile config straight from the documentation.
nggettext_compile: {
  all: {
    files: {
      'static/src/js/app/translations.js': ['po/*.po']
    }
  }
}
grunt --verbose shows that it finds po/el.po and creates static/src/js/app/translations.js:
Running "nggettext_compile:all" (nggettext_compile) task
Verifying property nggettext_compile.all exists in config...OK
Files: po/el.po -> static/src/js/app/translations.js
Options: (none)
Reading po/el.po...OK
Writing static/src/js/app/translations.js...OK
Done, without errors.
However, the resulting static/src/js/app/translations.js contains an empty list of translations:
angular.module('gettext').run(['gettextCatalog', function (gettextCatalog) {
    /* jshint -W100 */
    gettextCatalog.setStrings('el', {});
    /* jshint +W100 */
}]);
What am I missing?

nggettext_compile does not include translations flagged as "fuzzy". Since I'm just setting this up ready for our translators, all the translations are machine-generated and hence flagged as "fuzzy"...
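One way to clear the fuzzy flags before compiling (a sketch; the sample file path is made up). The canonical tool is msgattrib from GNU gettext — `msgattrib --clear-fuzzy --no-obsolete -o po/el.po po/el.po` — but if the gettext tools are unavailable, a crude sed fallback can strip plain `#, fuzzy` marker lines (it does NOT handle combined flags such as `#, fuzzy, c-format`):

```shell
# Build a tiny sample .po entry like the one in the question.
cat > /tmp/el-sample.po <<'EOF'
#: static/src/partials/layout/header.html:17
#, fuzzy
msgid "About"
msgstr "σχετικά με"
EOF

# Drop standalone "#, fuzzy" comment lines; the translation itself stays.
sed '/^#, fuzzy$/d' /tmp/el-sample.po > /tmp/el-clean.po
cat /tmp/el-clean.po
```

After clearing the flags, re-running nggettext_compile should emit the strings into translations.js.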

aws_imagebuilder_component issue with template_file

I need to deploy a bash script using the aws_imagebuilder_component resource. I am using a template_file data source to populate an array inside my script, and I am trying to figure out the correct way to render the template inside the imagebuilder_component resource.
I am pretty sure that the formatting of my script is the main issue :(
This is the error that I keep getting. It seems to be an issue with the way I am formatting the script inside the YAML; can you please assist? I have not worked with Image Builder or yamlencode before.
Error: error creating Image Builder Component: InvalidParameterValueException: The value supplied for parameter 'data' is not valid. Failed to parse document. Error: line 1: cannot unmarshal string phases:... into Document.
# Image builder component
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  data     = yamlencode(data.template_file.init.rendered)
  version  = "1.0.0"
}
# template_file
data "template_file" "init" {
  template = file("${path.module}/myscript.yml")

  vars = {
    devnames = join(" ", local.devnames)
  }
}
# myscript.yml
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: pre-requirements
        action: ExecuteBash
        inputs:
          commands: |
            #!/bin/bash
            for i in `seq 0 6`; do
              nvdev="/dev/nvme${i}n1"
              if [ -e $nvdev ]; then
                mapdev="${devnames[i]}"
                if [[ -z "$mapdev" ]]; then
                  mapdev="${devnames[i]}"
                fi
              else
                ln -s $nvdev $mapdev
                echo "symlink created: ${nvdev} to ${mapdev}"
              fi
            done
            # Marko
# tfvars
vols = {
  data01 = {
    devname = "/dev/xvde"
    size    = "200"
  }
  data02 = {
    devname = "/dev/xvdf"
    size    = "300"
  }
}
variables.tf: the list "devnames" is populated from the map object "vols" shown above:
locals {
  devnames = [for key, value in var.vols : value.devname]
}
main.tf: template_file assigns the values of the list "devnames" to the devnames template variable; devnames is then used inside myscript.yml:
devnames = join(" ", local.devnames)
Up to this point, everything works without issues. But when the component is created, it fails and complains about the formatting of the template rendered from myscript.yml. I am doing something wrong here that I cannot figure out:
## Image builder component
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  data     = yamlencode(data.template_file.init.rendered)
  version  = "1.0.0"
}

YAML indenting with neovim and treesitter?

I've recently upgraded to neovim 0.5.0, and I've been experimenting at replacing older syntax and indenting plugins with treesitter. I'm having some problems getting things to work correctly when editing YAML files.
I have the following in my init.lua file:
local ts = require 'nvim-treesitter.configs'
ts.setup {
  ensure_installed = 'maintained',
  highlight = {
    enable = true,
    additional_vim_regex_highlighting = false,
  },
  indent = {
    enable = true,
    disable = { 'python' },
  },
}
Running :checkhealth reports
health#nvim_treesitter#check
========================================================================
[...]
## Parser/Features H L F I J
[...]
- yaml ✓ ✓ ✓ ✓ ✓
But when I create a YAML file and type, for example...
- hosts: foo<RETURN>
...the cursor ends up at column 0 on the following line, rather than indented as required. This behavior persists for the rest of the file: regardless of the YAML syntax, the cursor always goes to column 0 on return.
I know that treesitter indent support is considered "experimental". Is this just broken right now, or do I have something misconfigured?
It looks like the YAML parser's indent queries are pretty rudimentary: https://github.com/nvim-treesitter/nvim-treesitter/blob/master/queries/yaml/indents.scm
You may have a better editing experience by disabling tree-sitter indentation for YAML alone and falling back to the default Vim regex indentation instead. In your nvim-treesitter config:
require('nvim-treesitter.configs').setup {
  indent = {
    enable = true,
    disable = { 'yaml' },
  },
}

Triggering Lambda on s3 video upload?

I am testing adding a watermark to a video once it is uploaded. I am running into an issue where Lambda wants me to specify which file to process on upload, but I want it to trigger when any file (really, any file that ends in .mov, .mp4, etc.) is uploaded.
To clarify, this all works when I create the pipeline and job manually.
Here's my code:
require 'json'
require 'aws-sdk-elastictranscoder'

def lambda_handler(event:, context:)
  client = Aws::ElasticTranscoder::Client.new(region: 'us-east-1')
  resp = client.create_job({
    pipeline_id: "15521341241243938210-qevnz1", # required
    input: {
      key: File, # this is where my issue is
    },
    output: {
      key: "CBtTw1XLWA6VSGV8nb62gkzY",
      # thumbnail_pattern: "ThumbnailPattern",
      # thumbnail_encryption: {
      #   mode: "EncryptionMode",
      #   key: "Base64EncodedString",
      #   key_md_5: "Base64EncodedString",
      #   initialization_vector: "ZeroTo255String",
      # },
      # rotate: "Rotate",
      preset_id: "1351620000001-000001",
      # segment_duration: "FloatString",
      watermarks: [
        {
          preset_watermark_id: "TopRight",
          input_key: "uploads/2354n.jpg",
          # encryption: {
          #   mode: "EncryptionMode",
          #   key: "zk89kg4qpFgypV2fr9rH61Ng",
          #   key_md_5: "Base64EncodedString",
          #   initialization_vector: "ZeroTo255String",
          # },
        },
      ],
    },
  })
end
How do I specify just "any file that is uploaded", or files of a specific format, for input: key:?
Now, my issue is that I am using Active Storage, so the key doesn't end in .jpg or .mov, etc.; it is just a randomly generated string (they have reasons for doing this). I am trying to find a reason to keep using Active Storage, and this is my final step to making it work like the alternatives before it.
The extension (suffix) field in the S3 trigger configuration is optional. If you don't specify anything in it, the Lambda will be triggered no matter what file is uploaded. You can then check inside the handler whether it's the type of file you want and proceed.
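As a minimal sketch of that check (the event hash below is a made-up sample shaped like a standard S3 put notification; the extension list is illustrative):

```ruby
require 'json'

# Extensions we treat as video (illustrative list).
VIDEO_EXTENSIONS = %w[.mov .mp4 .m4v].freeze

# The S3 notification carries the key of the object that was uploaded.
def object_key(event)
  event['Records'][0]['s3']['object']['key']
end

def video?(key)
  VIDEO_EXTENSIONS.include?(File.extname(key).downcase)
end

# Hypothetical event, shaped like a real S3 put notification:
event = { 'Records' => [{ 's3' => { 'object' => { 'key' => 'uploads/clip.MP4' } } }] }
key = object_key(event)
puts "transcoding #{key}" if video?(key)
```

The extracted key is what you would pass as `input: { key: key }` in create_job. With Active Storage keys, which carry no extension, you would filter on the object's Content-Type (e.g. via a HeadObject call) instead of the file name.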

asciidoctor not run (or not running properly) in gradle

I am having trouble getting the asciidoctor task to run in Gradle. The script is below.
asciidoctor {
    logger.lifecycle("Processing from $sourceDir to $asciiDocOutputDir")
    sources {
        include 'index.adoc'
    }
    backends = ['html5']
    attributes = [
        doctype: 'book',
        toc: 'left',
        toclevels: '3',
        numbered: '',
        sectlinks: '',
        sectanchors: '',
        hardbreaks: '',
        generated: asciiDocOutputDir
    ]
}
Here is the printout from the debug line to show the directories:
Processing from C:\Users\xxx\eclipse-workspace\springfox\src\docs\asciidoc to C:\Users\xxx\eclipse-workspace\springfox\build\asciidoc\generated
With the above logging line printed, does it mean the asciidoctor task was run? I don't see an error, but index.html was not created (or at least I could not find it in the folder where I expected to see it).
I have an index.adoc file under:
C:\Users\xxx\eclipse-workspace\springfox\src\docs\asciidoc\index.adoc with this content:
include::{generated}/overview.adoc[]
include::{generated}/paths.adoc[]
include::{generated}/security.adoc[]
include::{generated}/definitions.adoc[]
The 4 .adoc files are available in the following folder with contents:
C:\Users\xxx\eclipse-workspace\springfox\build\asciidoc\generated
But I cannot get index.html created :-(. There has got to be a silly mistake somewhere, but I just cannot figure it out. What should I look for, and how can I debug this?
Thanks!
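One thing worth checking (a sketch, not a confirmed diagnosis): in Gradle, the body of a task configuration block runs during the configuration phase, so the logger.lifecycle(...) line above prints even when the asciidoctor task itself never executes. Moving the log into a doFirst block makes it print only on actual execution, and running with --info shows whether the task really ran or was skipped:

```groovy
asciidoctor {
    // doFirst runs only when the task actually executes,
    // not when the build script is merely configured.
    doFirst {
        logger.lifecycle("Processing from $sourceDir to $asciiDocOutputDir")
    }
    sources {
        include 'index.adoc'
    }
    backends = ['html5']
}
```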

Why does puppet think my custom fact is a string?

I am trying to create a custom fact I can use as the value for a class parameter in a hiera yaml file.
I am using the openstack/puppet-keystone module and I want to use fernet-keys.
According to the comments in the module I can use this parameter.
# [*fernet_keys*]
# (Optional) Hash of Keystone fernet keys
# If you enable this parameter, make sure enable_fernet_setup is set to True.
# Example of valid value:
# fernet_keys:
# /etc/keystone/fernet-keys/0:
# content: c_aJfy6At9y-toNS9SF1NQMTSkSzQ-OBYeYulTqKsWU=
# /etc/keystone/fernet-keys/1:
# content: zx0hNG7CStxFz5KXZRsf7sE4lju0dLYvXdGDIKGcd7k=
# Puppet will create a file per key in $fernet_key_repository.
# Note: defaults to false so keystone-manage fernet_setup will be executed.
# Otherwise Puppet will manage keys with File resource.
# Defaults to false
So I wrote this custom fact ...
[root@puppetmaster modules]# cat keystone_fernet/lib/facter/fernet_keys.rb
Facter.add(:fernet_keys) do
  setcode do
    fernet_keys = {}
    puts('Debug keyrepo is /etc/keystone/fernet-keys')
    Dir.glob('/etc/keystone/fernet-keys/*').each do |fernet_file|
      data = File.read(fernet_file)
      if data
        content = {}
        puts("Debug Key file #{fernet_file} contains #{data}")
        fernet_keys[fernet_file] = { 'content' => data }
      end
    end
    fernet_keys
  end
end
Then in my keystone.yaml file I have this line:
keystone::fernet_keys: '%{::fernet_keys}'
But when I run puppet agent -t on my node I get this error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, "{\"/etc/keystone/fernet-keys/1\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}, \"/etc/keystone/fernet-keys/0\"=>{\"content\"=>\"xxxxxxxxxxxxxxxxxxxx=\"}}" is not a Hash. It looks to be a String at /etc/puppetlabs/code/environments/production/modules/keystone/manifests/init.pp:1144:7 on node mgmt-01
I had assumed that I had formatted the hash correctly because facter -p fernet_keys output this on the agent:
{
  /etc/keystone/fernet-keys/1 => {
    content => "xxxxxxxxxxxxxxxxxxxx="
  },
  /etc/keystone/fernet-keys/0 => {
    content => "xxxxxxxxxxxxxxxxxxxx="
  }
}
The code in the keystone module looks like this (with the module's own line numbers, which the error message references):
1142
1143   if $fernet_keys {
1144     validate_hash($fernet_keys)
1145     create_resources('file', $fernet_keys, {
1146       'owner'     => $keystone_user,
1147       'group'     => $keystone_group,
1148       'subscribe' => 'Anchor[keystone::install::end]',
1149     }
1150     )
1151   } else {
Puppet does not necessarily think your fact value is a string -- it might, if the client is set to stringify facts, but that's actually beside the point. The bottom line is that Hiera interpolation tokens don't work the way you think. Specifically:
Hiera can interpolate values of any of Puppet's data types, but the value will be converted to a string.
(Emphasis added.)
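Given that constraint, one workaround (a sketch; profile::keystone is a hypothetical wrapper class) is to skip Hiera interpolation for this parameter and bind the fact in Puppet code, where the hash keeps its type:

```puppet
# Hypothetical profile class: assigning the fact directly preserves
# the Hash type, unlike a '%{...}' interpolation token in Hiera,
# which stringifies the value.
class profile::keystone {
  class { 'keystone':
    fernet_keys => $facts['fernet_keys'],
  }
}
```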
