I want to use a custom font to generate a PDF with gofpdf. Here is what I'm using:
fontPath := filepath.Join(cfg.Path, "assets", "font", "oxygen.ttf")
log.Println(fontPath)
doc.AddUTF8Font("oxygen", "", fontPath)
doc.SetFont("oxygen", "", 12)
log.Println shows the expected absolute path:
/home/username/myapp/assets/font/oxygen.ttf
However, PDF generation throws this error, which I struggle to understand:
stat home/username/myapp/assets/font/oxygen.ttf: no such file or directory
Eventually I checked the stat myself with:
stat /home/username/myapp/assets/font/oxygen.ttf
The file exists and stat displays its info, but it looks like gofpdf ignores the leading slash (based on the error thrown). How do I refer to the file path properly?
Solved: I should have read the docs more carefully. The font directory has to be specified when creating the fpdf document. For me, the fix was changing:
gofpdf.New(orientation, "mm", "A4", "")
to
gofpdf.New(orientation, "mm", "A4", filepath.Join(cfg.Path, "assets", "font"))
Then the font file can be referenced by name only:
doc.AddUTF8Font("oxygen", "", "oxygen.ttf")
I am using text-to-speech (TTS) to create an audio file in .opus format. Since I have multiple locales, like US and UK, I need to copy the generated audio file for the UK locale, but with metadata that distinguishes the UK file from the US one.
I am able to add the locale tag with the following code:
return createOpus(data.wavFile, data.opusFile, {
  bitrate: 24,
  padding: 0,
  tags: { locale: US },
}).then(() => data);
For the copied file, I would like to change the locale attribute to the equivalent of "--comment locale=UK".
The code I use to copy the file is below:
"ditto " + data.opusFile + " " + UKPath;
If we can't modify the existing metadata, can we add a new attribute?
Thanks in advance!
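One possible approach, sketched under the assumption that the tags option takes plain strings: reuse the same createOpus helper from above and encode the same WAV a second time with a UK locale tag written to the UK path, instead of copying the US file with ditto and retagging it.
// Sketch only: produce the UK variant directly from the source WAV,
// writing it to UKPath with a UK locale tag. createOpus, data.wavFile
// and UKPath are the names used above; 'UK' as a string is an assumption.
return createOpus(data.wavFile, UKPath, {
  bitrate: 24,
  padding: 0,
  tags: { locale: 'UK' },
}).then(() => data);
If the files really must be copied first and retagged afterwards, note that Opus comment tags live inside the file itself, so a plain ditto copy cannot change them; some retagging or re-encoding step would be needed.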
I'm trying to inspect a CSV file and no findings are being returned. (I'm using the EMAIL_ADDRESS info type, and the addresses I'm testing come up as positive hits in the demo at https://cloud.google.com/dlp/demo/#!/.) I'm sending the CSV file into inspect_content with a byte_item as follows:
byte_item: {
  type: :CSV,
  data: File.open('/xxxxx/dlptest.csv', 'r').read
}
Looking at the supported file types, it appears CSV/TSV files are inspected via structured parsing.
For CSV/TSV, does that mean one can't just send in the file, and instead needs to use the table attribute rather than byte_item, as per https://cloud.google.com/dlp/docs/inspecting-structured-text?
What about XLSX files, for example? They're an unspecified file type, so I tried a configuration like the one below, but it still returned no findings:
byte_item: {
  type: :BYTES_TYPE_UNSPECIFIED,
  data: File.open('/xxxxx/dlptest.xlsx', 'rb').read
}
I'm able to do inspection and redaction with images and text just fine, but I'm having a bit of a problem with other file types. Any ideas/suggestions welcome! Thanks!
Edit: The contents of the CSV in question:
$ cat ~/Downloads/dlptest.csv
dylans#gmail.com,anotehu,steve#example.com
blah blah,anoteuh,
aonteuh,
$ file ~/Downloads/dlptest.csv
~/Downloads/dlptest.csv: ASCII text, with CRLF line terminators
The full request:
parent = "projects/xxxxxxxx/global"
inspect_config = {
  info_types: [{name: "EMAIL_ADDRESS"}],
  min_likelihood: :POSSIBLE,
  limits: { max_findings_per_request: 0 },
  include_quote: true
}
request = {
  parent: parent,
  inspect_config: inspect_config,
  item: {
    byte_item: {
      type: :CSV,
      data: File.open('/xxxxx/dlptest.csv', 'r').read
    }
  }
}
dlp = Google::Cloud::Dlp.dlp_service
response = dlp.inspect_content(request)
The CSV file I was testing with was created in Google Sheets and exported as a CSV; however, the file showed locally as "text/plain; charset=us-ascii". I downloaded a CSV off the internet and it had a MIME type of "text/csv; charset=utf-8". That one worked. So it looks like my issue was specifically due to the file having an incorrect MIME type.
xlsx is not yet supported. Coming soon. (Maybe that part of the question should be split out from the CSV debugging issue.)
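If you do end up needing the table route from the inspecting-structured-text docs, a hedged sketch of the same request built with a table item instead of byte_item looks like this (column names and cell values are illustrative; parent, inspect_config, and dlp are the variables defined above):
# Sketch: pass the CSV contents as a structured table instead of raw bytes.
table_item = {
  table: {
    headers: [{ name: "email" }, { name: "note" }],
    rows: [
      { values: [{ string_value: "alice@example.com" }, { string_value: "anotehu" }] },
      { values: [{ string_value: "bob@example.com" }, { string_value: "blah blah" }] }
    ]
  }
}

request = {
  parent: parent,
  inspect_config: inspect_config,
  item: table_item
}

response = dlp.inspect_content(request)
The byte_item path also works once the file has the correct text/csv MIME type, as noted above.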
I am reading an image from the local file system, converting it to bytes, and finally ingesting the image into a tf.train.Feature to convert it into TFRecord format. Things work fine until I read the TFRecord back and extract the image bytes, which come out in a sparse format. Below is my code for the complete process flow.
reading df and image file: No Error
import tensorflow as tf
from PIL import Image
img_bytes_list = []
for img_path in df.filepath:
    with tf.io.gfile.GFile(img_path, "rb") as f:
        raw_img = f.read()
    img_bytes_list.append(raw_img)
defining features : No Error
write_features = {'filename': tf.train.Feature(bytes_list=tf.train.BytesList(value=df['filename'].apply(lambda x: x.encode("utf-8")))),
'img_arr':tf.train.Feature(bytes_list=tf.train.BytesList(value=img_bytes_list)),
'width': tf.train.Feature(int64_list=tf.train.Int64List(value=df['width'])),
'height': tf.train.Feature(int64_list=tf.train.Int64List(value=df['height'])),
'img_class': tf.train.Feature(bytes_list=tf.train.BytesList(value=df['class'].apply(lambda x: x.encode("utf-8")))),
'xmin': tf.train.Feature(int64_list=tf.train.Int64List(value=df['xmin'])),
'ymin': tf.train.Feature(int64_list=tf.train.Int64List(value=df['ymin'])),
'xmax': tf.train.Feature(int64_list=tf.train.Int64List(value=df['xmax'])),
'ymax': tf.train.Feature(int64_list=tf.train.Int64List(value=df['ymax']))}
create example: No Error
example = tf.train.Example(features=tf.train.Features(feature=write_features))
writing data in TfRecord Format: No Error
with tf.io.TFRecordWriter('image_data_tfr') as writer:
writer.write(example.SerializeToString())
Read and print data: No Error
read_features = {"filename": tf.io.VarLenFeature(dtype=tf.string),
"img_arr": tf.io.VarLenFeature(dtype=tf.string),
"width": tf.io.VarLenFeature(dtype=tf.int64),
"height": tf.io.VarLenFeature(dtype=tf.int64),
"class": tf.io.VarLenFeature(dtype=tf.string),
"xmin": tf.io.VarLenFeature(dtype=tf.int64),
"ymin": tf.io.VarLenFeature(dtype=tf.int64),
"xmax": tf.io.VarLenFeature(dtype=tf.int64),
"ymax": tf.io.VarLenFeature(dtype=tf.int64)}
reading single example from tfrecords format: No Error
for serialized_example in tf.data.TFRecordDataset(["image_data_tfr"]):
    parsed_s_example = tf.io.parse_single_example(serialized=serialized_example,
                                                  features=read_features)
reading image data from tfrecords format: No Error
image_raw = parsed_s_example['img_arr']
encoded_jpg_io = io.BytesIO(image_raw)
Here it is giving error: TypeError: a bytes-like object is required, not 'SparseTensor'
image = Image.open(encoded_jpg_io)
width, height = image.size
print(width, height)
Please tell me what changes are required for "img_arr" so that it does not come back as a sparse tensor and instead returns bytes.
Is there anything that I can do to optimize my existing code?
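One hedged sketch of a fix: because 'img_arr' was written as a BytesList with one entry per image, tf.io.VarLenFeature hands it back as a SparseTensor; its .values attribute is an ordinary dense string tensor, so the raw bytes can be pulled from there (if each record held exactly one image, tf.io.FixedLenFeature([], tf.string) would avoid the sparse representation entirely). The snippet below reuses read_features from above and adds the import io that the original code also needs:
import io

from PIL import Image
import tensorflow as tf

for serialized_example in tf.data.TFRecordDataset(["image_data_tfr"]):
    parsed_s_example = tf.io.parse_single_example(serialized=serialized_example,
                                                  features=read_features)
    # SparseTensor.values is a dense 1-D tf.string tensor of the raw bytes.
    img_bytes = parsed_s_example["img_arr"].values

    for raw in img_bytes.numpy():  # one entry per image stored in the record
        image = Image.open(io.BytesIO(raw))
        width, height = image.size
        print(width, height)
On the optimization question: writing one tf.train.Example per image, rather than packing every row of the DataFrame into a single Example, is the more common layout and would make FixedLenFeature parsing straightforward; that is a design suggestion rather than something the error above requires.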
I want VSCode on my Mac to use 4 spaces instead of 2 when I select Format Document. This is what I have in my User Settings:
{
  "editor.fontFamily": "Andale Mono",
  "editor.fontSize": 13,
  "editor.renderWhitespace": "all",
  "editor.tabSize": 4,
  "[dart]": {
    "editor.tabSize": 4,
    "editor.detectIndentation": false
  },
  "workbench.colorTheme": "Material Theme",
  "materialTheme.fixIconsRunning": false,
  "workbench.iconTheme": "eq-material-theme-icons"
}
However, when I format the document, it does not respect the 4-space tab size; it uses 2.
This is a limitation of the Dart plugin for VS Code. It uses the official dart_style formatter, which does not support configuring the indent size and always formats with two spaces (the same as running dartfmt).
If you'd like to see a more flexible formatter, please add a thumbs-up to this GitHub issue:
https://github.com/Dart-Code/Dart-Code/issues/914
I'm a newbie in JavaScript and I want to try to visualize data using JavaScript, especially d3.js. I found an example of the graph I want to build in nvd3.js (http://nvd3.org/examples/linePlusBar.html); it is a line and bar chart combined in one place. I tried to modify it to look like http://www.highcharts.com/demo/combo-multi-axes, but I still cannot do that.
My question is: how can I put more lines in a Line Plus Bar Chart using nvd3.js?
Thanks :)
When you draw a Line Plus Bar Chart using nvd3.js, make sure that in the JSON you pass into the chart you add the attribute "bar": true to the series that should render as bars; the rest will load as lines.
A sample JSON that's passed into the chart will look like this:
[{
"key" : "Bar Chart",
"bar" : true,
"color" : "#ccf",
"values" : [[1136005200000, 1271000.0], [1138683600000, 1271000.0], [1141102800000, 1271000.0], [1143781200000, 0], [1146369600000, 0]]
}, {
"key" : "Line Chart1",
"color" : "#c2f",
"values" : [[1136005200000, 71.89], [1138683600000, 75.51], [1141102800000, 68.49], [1143781200000, 62.72], [1146369600000, 70.39]]
}, {
"key" : "Line Chart2",
"color" : "#cff",
"values" : [[1136005200000, 89], [1138683600000, 51], [1141102800000, 49], [1143781200000, 72], [1146369600000, 39]]
}]
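For completeness, a hedged sketch of wiring an array like the one above into the chart; the '#chart svg' selector, the data variable name, and the d3 v3 time formatting are assumptions taken from the standard nvd3 linePlusBarChart examples:
// Sketch: render the series array (one bar series plus any number of
// line series) with nv.models.linePlusBarChart().
nv.addGraph(function () {
  var chart = nv.models.linePlusBarChart()
      .x(function (d) { return d[0]; })   // timestamp
      .y(function (d) { return d[1]; })   // value
      .color(d3.scale.category10().range());

  chart.xAxis.tickFormat(function (d) {
    return d3.time.format('%x')(new Date(d));
  });

  d3.select('#chart svg')
      .datum(data)                        // the array shown above
      .transition().duration(500)
      .call(chart);

  nv.utils.windowResize(chart.update);
  return chart;
});
Any series without "bar": true is drawn as a line, so adding more lines is just a matter of appending more objects like "Line Chart2" to the array.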
UPDATE
Looking at your fiddle here, I found the following in the console:
- Refused to execute script from 'https://raw.githubusercontent.com/novus/nvd3/master/lib/d3.v3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
- Refused to execute script from 'https://raw.githubusercontent.com/novus/nvd3/master/nv.d3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
- Refused to execute script from 'https://raw.githubusercontent.com/novus/nvd3/master/src/models/linePlusBarChart.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
Basically it had difficulty loading d3.js and nvd3.js. I updated the fiddle here with new links to the JS files, and it seems to work fine.
Hope it helps