Converting logs to JSON format - go

klog.Info("kube config file loaded ")
Is there a way to convert logs to JSON format in Go?
I tried:
klog.Info(json.Marshal("kube config file loaded"))
but did not get JSON output. Help me out.

klog doesn't support JSON formatting out of the box. Note also that json.Marshal returns a []byte and an error, so passing its result straight to klog.Info logs the raw byte slice rather than a JSON string. If you need structured JSON logs in Go, consider a logger that emits JSON natively, such as zap, logrus, or the standard library's log/slog with its JSON handler.


Mulesoft SFTP write .csv file encoded UTF-8 with BOM

I have the requirement to store a csv file with the SFTP write processor from Mulesoft on an SFTP server.
File format: CSV (comma as separator), UTF-8 with BOM
In a Transform Message component I transform the JSON payload into application/csv with encoding "UTF-8":
That works well; the CSV is then available on the SFTP server in UTF-8.
%dw 2.0
output application/csv encoding="UTF8"
---
payload.data
My problem is how can I attach the BOM to the file?
Before you write the data to SFTP, create a variable called utf8BOM using the DataWeave below:
%dw 2.0
import * from dw::core::Binaries
output application/java
---
fromHex("EFBBBF")
Then create your new file using a Write connector and write the variable above first; e.g. in the content put
vars.utf8BOM
Then write your CSV content to the same file with the Write connector's append mode.
This should produce a UTF-8 CSV file that starts with the BOM bytes.
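Outside of Mule, the same two-step idea (write the BOM bytes first, then append the CSV data) can be sketched in Python; the file name and rows here are made-up examples:

```python
import csv

# Hypothetical rows standing in for the transformed payload.
rows = [["id", "name"], ["1", "Jill's Business"]]

# Step 1: write the UTF-8 BOM bytes (EF BB BF) first.
with open("out.csv", "wb") as f:
    f.write(b"\xef\xbb\xbf")

# Step 2: append the CSV content to the same file.
with open("out.csv", "a", encoding="utf-8", newline="") as f:
    csv.writer(f).writerows(rows)

# The file now starts with the BOM.
with open("out.csv", "rb") as f:
    print(f.read()[:3])  # b'\xef\xbb\xbf'
```

Python could also do this in one step with encoding="utf-8-sig", but the two-step version mirrors the Mule flow above.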

RL0 file format

What is the .rl0 file format, and how can I access its data?
I have been searching for the RL0 file format, but all I find are the r10 and rlo formats, and I am unable to read the data inside the file.
How do I get the data inside the file?
The file is binary; you can dump its raw bytes as a hex string with the following Python code:
import binascii

# Read the whole file as raw bytes and print them as a hex string.
with open(final_name, "rb") as myfile:
    header = myfile.read()

header = binascii.hexlify(header).decode("ascii")
print(header)

Trying to parse my password details into yaml, Instead of entering it manually

I have been storing my password and connection strings in a YAML file. Now I want to keep the password in a ".txt" file on a local drive and parse it into the YAML file. Any guidance on how to approach the parsing would be helpful.
Below are the password details and connection strings used in the YAML file I created.
app_server_name: ************
app_server_username: $USERNAME$
app_server_password: '**********' #trying to parse from local drive
app_yaml_file: /devl/galaxy/common/edwEnvironmentConfig.yml
environment_profile: /home/$USERNAME$/eproduct/difenv
teradata_logon_file: /home/$USERNAME$/.priv/devl/.td_connection
project_root_directory: /home/$USERNAME$/eproduct/
teradata_auth_type: 'ldap'
tmode: 'TERA'
What I Found:
http://ruby-doc.org/stdlib-1.9.3/libdoc/yaml/rdoc/YAML.html
Also using the psych gem:
https://docs.ruby-lang.org/en/2.5.0/Psych.html
I expect to parse the password details into the YAML file instead of entering them manually.
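A language-agnostic sketch of the approach (shown here in Python; in Ruby the same idea works with File.read plus YAML.load) is to read the password from the text file and substitute it into the YAML before use. The file names, the __PASSWORD__ placeholder, and the password value are all made-up examples:

```python
from pathlib import Path

# Made-up stand-ins for the local .txt password file and the YAML config.
Path("secret.txt").write_text("s3cr3t\n")
Path("config.yml").write_text("app_server_password: '__PASSWORD__'\n")

# Read the password and splice it into the YAML text.
password = Path("secret.txt").read_text().strip()
config_text = Path("config.yml").read_text().replace("__PASSWORD__", password)
print(config_text)  # app_server_password: 's3cr3t'
```

Substituting at read time keeps the secret out of the YAML file itself, which only ever contains the placeholder.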

Is it possible to load a CSV file with Pandoc and produce a markdown file for each line?

I have a CSV file similar to below:
0,Bob's Business,50 some address,zip,telephone
1,Jill's Business,25 some address,zip,telephone
...
I would like to take this CSV file and have Pandoc produce a markdown file for each line, with each column accessible as a variable in a markdown template file.
Is it possible to load a CSV file and produce markdown/html files in this way?
I can see three ways.
Use a static site generator
I would probably just use a tool like jekyll with its data files.
Alternative 1: Convert to YAML and use pandoc's template engine
Put something like this in mytemplate.md:
$for(data)$
$data$
$endfor$
Convert the CSV to a JSON or YAML file, then
load that file with the --metadata-file option and use the template to render the output:
echo '' | pandoc --metadata-file data.yaml -t markdown --template mytemplate.md -o output.md
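The CSV-to-metadata step can be sketched with Python's standard library. The field names and output file name below are assumptions; writing JSON keeps this dependency-free, and recent pandoc versions accept JSON metadata files as well as YAML:

```python
import csv, io, json

# Inline copy of the example CSV from the question.
csv_text = """0,Bob's Business,50 some address,zip,telephone
1,Jill's Business,25 some address,zip,telephone
"""

# Assumed column names for the template variables.
fields = ["id", "name", "address", "zip", "telephone"]
rows = [dict(zip(fields, row)) for row in csv.reader(io.StringIO(csv_text))]

# Write a metadata file that the template's $for(data)$ loop can iterate over.
with open("data.json", "w", encoding="utf-8") as f:
    json.dump({"data": rows}, f, indent=2)

print(rows[0]["name"])  # Bob's Business
```

To get one markdown file per line, you could instead write one small metadata file per row and invoke pandoc in a loop.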
Alternative 2: Write a pandoc filter
There are many pandoc filters (like pandoc-placetable or pantable) that read csv and convert it to a pandoc table. But you want to convert it to a pandoc metadata format (which is usually parsed from the YAML frontmatter of markdown files). I guess you could adjust one of those pandoc filters to your purposes.

Converting from .dat to netcdf(.nc)

I have a .dat file https://www.dropbox.com/s/8dbjg0i6l7a4sb6/CRUST10-xyz-complete.dat?dl=0 that I need to convert to either .grd or .nc so that I can visualize the data in GMT (Generic Mapping Tools). I tried to do this with cdo using the following command:
cdo -f nc import_binary CRUST10-xyz-complete.dat CRUST10-xyz-complete.nc
but got following error:
Open Error: Unknown keyword in description file
--> The invalid description file record is:
--> 0.5,89.5,4.19,0,2,0.7,0,0.73,1.62,5.01,14.25,10.06,7.36,2.7,1.5,3.81,2,3.5,0,5,6.5,7.1,8.07,5.5865805168986,6.7596467391304,2.3888888888889
The data file was not opened.
cdo import_binary (Abort): Open failed!
Can anyone please guide me?
First make a .ctl (GrADS descriptor) file, then run:
cdo -f nc import_binary CRUST10-xyz-complete.ctl CRUST10-xyz-complete.nc
Here is a reference for writing a .ctl file: http://cola.gmu.edu/grads/gadoc/descriptorfile.html
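For reference, a minimal GrADS descriptor (.ctl) file has the following shape; every value below (grid sizes, variable name, undef value) is a placeholder and would need to match the actual layout of the data. Note that import_binary expects flat binary data described by the .ctl, so an ASCII .dat like this one would first have to be converted to binary:

```
DSET ^CRUST10-xyz-complete.bin
TITLE placeholder descriptor
UNDEF -9.99e33
XDEF 360 LINEAR 0.5 1.0
YDEF 180 LINEAR -89.5 1.0
ZDEF 1 LEVELS 0
TDEF 1 LINEAR 00Z01JAN2000 1yr
VARS 1
thick 0 99 placeholder variable description
ENDVARS
```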
Without the data itself, it is hard to see what went wrong. In any case, have a look at the following thread from the cdo forum:
https://code.mpimet.mpg.de/boards/1/topics/3631
which you can use as an example of how to convert ASCII data to netCDF with cdo.
