I'd like to know how I could read a YAML file from a Jython script.
I have created a Jython script that calls WebSphere Application Server commands to create several datasources, virtual hosts, name space bindings, etc.
However, for now the values are hard-coded in the script, and a lot of Jython code is duplicated because using arrays is not convenient.
Ideally I'd like to have something like this, in an external file, read by the Jython script:
Cell
  cellName: Cell01
  JAAS
    "alias1"
      aliasName: "j2cALiasA"
      aliasDesc: "First j2cAlias"
    "alias2"
      aliasName: "j2cALiasB"
      aliasDesc: "Second j2cAlias"
Node:
  nodeName: Node01
  JAASAuthData:
    jdbcProviderType: ...
Server
  serverName: server-1
  datasources
    "datasource1"
      datasourceName: "jdbc/datasource1"
      datasourceAuthDataAlias:
And then I'd like to loop over those different objects (I am not sure about the YAML syntax here; it's just for the example).
How could I do that? Is there a YAML parser for Jython? I can't find anything.
If you have other suggestions about externalizing configuration for WAS Admin Jython scripts, that would also be useful :)
SOLUTION
For WAS 8.5 I had to switch to Jython 2.7 by using a thin client that I created with this procedure: http://www.ibm.com/developerworks/websphere/library/techarticles/1207_vansickel/1207_vansickel.html.
Then I had to manually download the PyYAML-3.11 package and edit its setup.py, because otherwise you get this error: http://pyyaml.org/ticket/163. So I just used this:
def ext_status(self, ext):
    return False
And then installed the package from the archive:
<THIN_CLIENT_HOME>/lib/jython/bin/pip install /root/PyYAML-3.11.tar.gz
And then you execute the Jython script like this:
./thinClient.sh -port 9809 -host websphere-1 -f /root/yaml.py
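With PyYAML installed into the thin client's Jython, the script can load the externalized configuration and loop over it. Here is a minimal sketch; the path /root/config.yaml is just a placeholder, and the keys mirror the example structure above:

import yaml  # PyYAML, installed with the thin client's pip as shown above

# Placeholder path; point this at wherever the YAML configuration lives.
with open('/root/config.yaml') as f:
    config = yaml.safe_load(f)

print('Working on cell %s' % config['Cell']['cellName'])

# Loop over the JAAS aliases defined under Cell.
for alias_id, alias in config['Cell']['JAAS'].items():
    print('creating alias %s (%s)' % (alias['aliasName'], alias['aliasDesc']))
    # the AdminTask/AdminConfig calls for each alias would go here

# Loop over the datasources defined under Server.
for ds_id, ds in config['Server']['datasources'].items():
    print('creating datasource %s' % ds['datasourceName'])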
Your data is not really YAML; a few colons are missing and there are a few unnecessary quotes:
Cell:
  cellName: Cell01
  JAAS:
    alias1:
      aliasName: j2cALiasA
      aliasDesc: First j2cAlias
    alias2:
      aliasName: j2cALiasB
      aliasDesc: Second j2cAlias
Node:
  nodeName: Node01
  JAASAuthData:
    jdbcProviderType: ...
Server:
  serverName: server-1
  datasources:
    datasource1:
      datasourceName: jdbc/datasource1
      datasourceAuthDataAlias:
Written that way, it properly parses/loads under Jython 2.7.0 on Linux with ruamel.yaml (disclaimer: I am the author of that package). You can install that package with pip install ruamel.yaml.
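For example, a minimal sketch using ruamel.yaml's YAML() API (config.yaml is just a placeholder for wherever the data above is stored):

from ruamel.yaml import YAML

yaml = YAML()  # the default round-trip loader, built on top of the safe loader
with open('config.yaml') as fp:  # placeholder file name
    data = yaml.load(fp)

# Walk the JAAS aliases from the corrected example above.
for alias_id, alias in data['Cell']['JAAS'].items():
    print('%s -> %s (%s)' % (alias_id, alias['aliasName'], alias['aliasDesc']))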
Related
I want to deploy a Helm chart using ansible-playbook; my command looks like this:
helm install istio-operator manifests/charts/istio-operator --set operatorNamespace=istio-operator
However, I could not find the equivalent of the --set argument in the Ansible plugin.
The bad news is that the documentation fails to document the values: parameter, but one can see its use in the Examples section:
- community.kubernetes.helm:
    name: istio-operator
    chart_ref: manifests/charts/istio-operator
    values:
      operatorNamespace: istio-operator
If for some reason that doesn't work, using --set is (plus or minus) the same as putting that key-value pair in a YAML file and then calling --values $the_filename, so you'd want to do that same operation manually: create the file on the target machine (not the controller), then invoke community.kubernetes.helm: with the documented values_files: pointed at that newly created YAML file.
https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
I can deploy a template directly like this: gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja
But what if that template depends on other files that need to be imported? If you use a --config file, you can define imports in that file and call the template as a resource, but you can't pass parameters/properties to a config file. I want to call a template directly so I can pass --properties via the command line, but that template also needs to import other files.
EDIT: What I needed was a top-level Jinja template instead of a config. My confusion was that you can't use imports in a Jinja template without a schema file; it was failing and I thought it wasn't supported. So the solution was just to swap out the config for a Jinja template (with a schema file), and then I can use --properties.
Maybe you can try importing the dependent files into your config file as follows:
imports:
- path: vm-template.jinja
- path: vm-template-2.jinja

# In the resources section below, the properties of the resources are replaced
# with the names of the templates.
resources:
- name: vm-1
  type: vm-template.jinja
- name: vm-2
  type: vm-template-2.jinja
and Set Arbitrary Metadata in it to create a special variable that you can pass and might use in other applications outside of Deployment Manager:
properties:
  size:
    type: integer
    default: 2
    description: Number of Mongo Slaves
    variable-x: ultra-secret-sauce
More info about gcloud deployment-manager deployments create optional flags and an example can be found here.
More info about passing properties using a schema can be found here.
Hope it helps
I had an issue with an eyaml file used to store a password for a DB connection, and it seems that I missed a "[".
I want to know if there is a command or script to check eyaml syntax.
One thing you can do, if you have python installed somewhere, is install ruamel.yaml (disclaimer: I am the author of that package) and run the following:
python check.py your_eyaml_file
with check.py being:
import sys
from ruamel.yaml import YAML

yaml = YAML()
with open(sys.argv[1]) as fp:   # open the file passed on the command line
    yaml.load(fp)
This will do a safe load of your YAML file and will throw an error if your file doesn't conform to the YAML specification.
There are also online parsers where you can run such checks, but I would not want to use them with sensitive information (encrypted or not).
I am debugging creation of a custom AMI and it's not clear to me how EC2 actually installs the public key of your keypair onto your AMI... I presume it goes into ~someuser/.ssh/authorized_keys, but I cannot figure out if this is done exactly once, on every boot, or how the target user is determined.
More specifically, cloud-init is a Python module that gets run every time an instance starts.
You can browse through the code here:
/usr/lib/python2.7/dist-packages/cloudinit
The parts that get the key are the DataSource.py and DataSourceEc2.py files. They query the metadata using the URL: http://169.254.169.254/2011-01-01/meta-data/public-keys/.
They find the list of keys using that URL and then pick them up one by one (it's usually just one). Ultimately they query: http://169.254.169.254/2011-01-01/meta-data/public-keys/0/openssh-key/ and then copy that key to the default cloud-init user's ~/.ssh/authorized_keys file.
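If you want to reproduce what cloud-init does by hand, you can query those same metadata endpoints from inside the instance; a rough sketch (the URLs are the ones above, everything else is illustrative):

# Rough sketch: list the instance's key pairs and fetch each public key from
# the EC2 metadata service, using the same endpoints cloud-init queries.
# Only works from inside the instance.
import urllib2  # Python 2, to match the cloud-init code discussed above

BASE = 'http://169.254.169.254/2011-01-01/meta-data/public-keys/'

index = urllib2.urlopen(BASE).read()            # e.g. "0=my-keypair"
for line in index.splitlines():
    idx = line.split('=', 1)[0]
    key = urllib2.urlopen(BASE + idx + '/openssh-key/').read()
    print(key)  # this is what ends up in ~/.ssh/authorized_keys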
The default cloud-init user (as well as all of the cloud-init config) is defined in the /etc/cloud/cloud.cfg file. This is an excerpt of a cloud.cfg file:
user: ubuntu
disable_root: 1
preserve_hostname: False
# datasource_list: ["NoCloud", "ConfigDrive", "OVF", "MAAS", "Ec2", "CloudStack"]
cloud_init_modules:
- bootcmd
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- ca-certs
- rsyslog
- ssh
cloud_config_modules:
- disk-setup
- mounts
- ssh-import-id
- locale
- set-passwords
- grub-dpkg
...
It's basically a YAML-format config file.
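Since it is plain YAML, you can inspect it with any YAML loader; for example, a quick sketch (assuming PyYAML is available on the instance):

# Quick sketch: read the default cloud-init user and the module lists from
# /etc/cloud/cloud.cfg. Assumes PyYAML is installed on the instance.
import yaml

with open('/etc/cloud/cloud.cfg') as f:
    cfg = yaml.safe_load(f)

print(cfg.get('user'))                # "ubuntu" in the excerpt above
print(cfg.get('cloud_init_modules'))  # modules run early at boot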
For more information on cloud-init you can read their public docs here:
http://cloudinit.readthedocs.org/en/latest/index.html
Hope this helps.
I'm writing an Apache module and want to get a string with the Apache name, version, and other details, much like what gets added to outgoing headers, e.g.:
Server: Apache/2.2.13 (Win32)
I've tried code like this:
apr_table_get(request_rec->headers_out,"Server")
But that doesn't seem to work. Is there an API call I haven't found or am I doomed to get version resource data from httpd.exe?
Try this command:
apache2 -v
It should print out something like this:
Server version: Apache/2.2.11 (Ubuntu)
Server built: Mar 9 2010 21:05:51
Most Unix commands have a -v option.
It looks like you are trying to get it from PHP; the exec command in PHP will let you run the command on the server.
Found it: ap_get_server_version. My HTTPD2 API wrapper was missing this declaration.
I'm not sure about Apache modules, but for CGI scripts, the name of the current web server is stored in the SERVER_SOFTWARE environment variable. In Perl, for example, you would use $ENV{SERVER_SOFTWARE} to read it. In C you would use getenv ("SERVER_SOFTWARE").
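The same thing works from a Python CGI script, for what it's worth; a tiny sketch:

#!/usr/bin/env python
# Tiny CGI sketch: echo the server identification string back to the client.
# SERVER_SOFTWARE is set by the web server for CGI scripts.
import os

print('Content-Type: text/plain')
print('')
print(os.environ.get('SERVER_SOFTWARE', 'unknown'))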
In order to find out the server software, why not just grep through the Apache source code to find where this is defined?
Doing this with Apache 1.3.41, I find that it is defined in a file called util_script.c on line 240 as follows:
ap_table_addn(e, "SERVER_SOFTWARE", ap_get_server_version());
It looks like there is a function called ap_get_server_version which returns the value as a string.