In my Logstash config file I have many jdbc inputs, and all of them use the same database and the same credentials. Each time I want to change, for example, the connection string, I have to go through all of the jdbc inputs manually. Can I somehow define variables once and then use them throughout the config file, for example like this?
CONNECTION_STRING_VARIABLE => "MY_CONNECTION_STRING"
jdbc {
    ...
    jdbc_connection_string => CONNECTION_STRING_VARIABLE
    ...
}
I don't want to use environment variables because of the user and password fields, and I want to store the variables in one place.
For passwords, you can use the jdbc_password_filepath option.
I read that you don't want to set environment variables. Here is a way to do it so that the variables are not present in the environment of every shell but are loaded only for Logstash.
You can create a script that exports all the variables Logstash needs and source it in the Logstash service or in the Logstash command-line script.
For example, create a file called exportVariablesForLogstash.sh:
export jdbc_url="jdbc:mysql://example.local:3306/sampledb"
export jdbc_username=mysqluser
Add the following at the start of the Logstash service or Logstash command-line script. Note the leading dot: it sources the file into the current shell.
. exportVariablesForLogstash.sh
Then you can use these variables as documented here: https://www.elastic.co/guide/en/logstash/current/environment-variables.html. I believe you already know this one.
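With those variables exported, each jdbc input can reference them with the ${...} syntax; a minimal sketch (the password file path is a placeholder, not something from the question):
jdbc {
    jdbc_connection_string => "${jdbc_url}"
    jdbc_user => "${jdbc_username}"
    jdbc_password_filepath => "/path/to/password_file"
    ...
}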
Is it possible within an Ansible hosts file to refer to hosts that are defined in another group, as in my sample below, or is there any other way of doing this directly in the hosts file? (I would prefer not to change existing playbooks or use limit flags.)
# Handled by terraform with company policies (can't change this)
[web]
direct-15-67-156-6.bdb.company.com
direct-12-67-116-124.lia.company.com
[lb]
direct-12-68-117-13.osp.company.com
# BEGIN ANSIBLE MANAGED BLOCK
[mywebsite]
web[0]
[gatling]
web[1]
You can't do what you suggest with a static hosts file. However, the static hosts file can be replaced with a dynamic inventory script, and then you are free to use whatever logic you wish to build out your groups and hosts. Perhaps you can persuade Terraform to produce the data in a JSON file (or similar) that your inventory script could then consume?
Docs are here.
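As a rough illustration only (the hostnames are copied from the question; in practice the script would read whatever file Terraform produces), a dynamic inventory can be any executable that answers --list with JSON:
#!/usr/bin/env python3
# Hypothetical dynamic inventory sketch: hard-codes the Terraform-managed hosts
# and derives the extra groups from positions in the web group.
import json
import sys

web = [
    "direct-15-67-156-6.bdb.company.com",
    "direct-12-67-116-124.lia.company.com",
]
lb = ["direct-12-68-117-13.osp.company.com"]

inventory = {
    "web": {"hosts": web},
    "lb": {"hosts": lb},
    "mywebsite": {"hosts": [web[0]]},  # "web[0]" in the question
    "gatling": {"hosts": [web[1]]},    # "web[1]" in the question
    "_meta": {"hostvars": {}},
}

if len(sys.argv) > 1 and sys.argv[1] == "--list":
    print(json.dumps(inventory))
else:
    print(json.dumps({}))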
I use ~/.ssh/config to manage hosts that I need to interact with frequently. Sometimes I would like to access the aliases from that file in scripts that do not use ssh directly, i.e. if I have a .ssh/config with
Host host1
    User user1
    Hostname server1.somewhere.net
I would like to be able to say something like sshcfg['host1'].Hostname to get server1.somewhere.net in scripting languages, particularly Python, and something sensible in Bash.
I would prefer to do this with standard tools/libraries if possible. I would also prefer the tools to autodetect the current configuration from the environment rather than have to be pointed explicitly at a configuration file. I do not know if there is a way to have alternate configs in ssh, but if there is, I would like a way to autodetect the currently active one. Otherwise, just defaulting to ~/.ssh/config would do.
It is possible to specify an alternative configuration file on the command line, but ~/.ssh/config is always the default. For an alternative configuration file, use the -F configfile option.
You can either try parsing the original config file, or keep a file that is better suited for manipulation and generate an alternative configuration file (or part of the default configuration file) from it.
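If a shell one-liner is enough for the Bash side, a reasonably recent OpenSSH (6.8+, which is an assumption about your environment) can print the resolved configuration for a host, which avoids parsing the file yourself:
# Print the resolved HostName for the alias host1
ssh -G host1 | awk '$1 == "hostname" { print $2 }'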
Using Paramiko
Paramiko is a pure-Python (3.6+) implementation of the SSHv2 protocol, providing both client and server functionality.
You may not need every feature Paramiko provides, but based on its documentation, this module could do just what you need: paramiko.config.SSHConfig
Representation of config information as stored in the format used by OpenSSH. Queries can be made via lookup. The format is described in OpenSSH’s ssh_config man page. This class is provided primarily as a convenience to posix users (since the OpenSSH format is a de-facto standard on posix) but should work fine on Windows too.
You can load a non-standard config file, the same way you would specify an alternate ssh config file on the ssh command line, as mentioned in a previous answer:
from paramiko.config import SSHConfig

config = SSHConfig.from_file(open("some-path.config"))
# Or more directly:
config = SSHConfig.from_path("some-path.config")
# Or if you have arbitrary ssh_config text from some other source:
config = SSHConfig.from_text("Host foo\n\tUser bar")
You can then retrieve whichever configuration values you need. For example, with a configuration file like this:
Host foo.example.com
    PasswordAuthentication no
    Compression yes
    ServerAliveInterval 60
You could access different settings using:
import os
from paramiko.config import SSHConfig

my_config = SSHConfig()
my_config.parse(open(os.path.expanduser('~/.ssh/config')))
conf = my_config.lookup('foo.example.com')
assert conf['passwordauthentication'] == 'no'
assert conf.as_bool('passwordauthentication') is False
assert conf['compression'] == 'yes'
assert conf.as_bool('compression') is True
assert conf['serveraliveinterval'] == '60'
assert conf.as_int('serveraliveinterval') == 60
In case you only need the available hostnames, you can list them all and then look up each one's configuration using that context (more info here):
get_hostnames()
Return the set of literal hostnames defined in the SSH config (both explicit hostnames and wildcard entries).
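A small illustration, reusing the my_config object parsed above (the printed values are assumptions based on the sample config):
hostnames = my_config.get_hostnames()
print(hostnames)  # e.g. {'foo.example.com'}, plus '*' if a wildcard entry exists

for host in hostnames:
    print(host, my_config.lookup(host).get('hostname'))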
This might not satisfy your request for standard tools/libraries, but the project seems to be actively maintained and widely used as well.
I have a scenario where I want to process a CSV file and load it into another database:
Cases
pick the CSV file and load it into MySQL with the same name as the CSV
then do some modification on the loaded rows using a Python task file
after that, extract the data from MySQL and load it into some other database
The CSV files come from a remote server into a folder on the Airflow server.
We have to pick up these CSV files and process them through a Python script.
Suppose I pick one CSV file; then I need to pass this file to the rest of the operators in a dependency chain like:
filename: abc.csv
task1 >> task2 >> task3 >> task4
So abc.csv should be available to all the tasks.
Please tell me how to proceed.
Your scenario doesn't have anything to do with real time. This is ingesting on a schedule/interval. Or perhaps you could use a Sensor operator to detect data availability.
Implement each of your requirements as functions and call them from operator instances.
Add the operators to a DAG with a schedule appropriate for your incoming feed.
How you pass and access params (see the sketch after this list):
- kwargs to the python_callable when initializing an operator
- context['param_key'] in the execute method when extending an operator
- Jinja templates
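A minimal sketch of the first option, passing kwargs to a python_callable (the function name, task id, and the dag object are placeholders, not from the answer):
from airflow.operators.python_operator import PythonOperator

def load_csv(filename, **context):
    # 'filename' arrives via op_kwargs; the schedule date is available in the context
    print("processing", filename, "for", context['ds'])

load_task = PythonOperator(
    task_id='load_csv',
    python_callable=load_csv,
    op_kwargs={'filename': 'abc.csv'},
    provide_context=True,
    dag=dag,
)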
relevant...
airflow pass parameter from cli
execution_date in airflow: need to access as a variable
The way tasks communicate in Airflow is via XCom, but it is meant for small values, not for file content.
If you want your tasks to work with the same CSV file, you should save it to some location and then pass the path to that location in the XCom.
We are using the LocalExecutor, so the local file system is fine for us.
We decided to create a folder for each DAG, named after the DAG. Inside that folder we generate a folder for each execution date (we do this in the first task, which we always call start_task). Then we pass the path of this folder to the subsequent tasks via XCom.
Example code for the start_task:
def start(share_path, **context):
    execution_date_as_string = context['execution_date'].strftime(DATE_FORMAT)
    execution_folder_path = os.path.join(share_path, 'my_dag_name', execution_date_as_string)
    _create_folder_delete_if_exists(execution_folder_path)
    task_instance = context['task_instance']
    task_instance.xcom_push(key="execution_folder_path", value=execution_folder_path)
start_task = PythonOperator(
    task_id='start_task',
    provide_context=True,
    python_callable=start,
    op_args=[share_path],
    dag=dag
)
The share_path is the base directory for all DAGs; we keep it in the Airflow Variables.
Subsequent tasks can get the execution folder with:
execution_folder_path = task_instance.xcom_pull(task_ids='start_task', key='execution_folder_path')
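A downstream task can then build file paths from that folder; a rough sketch in the same style (the process function and the abc.csv name are illustrative, not from the original answer):
def process(**context):
    task_instance = context['task_instance']
    execution_folder_path = task_instance.xcom_pull(
        task_ids='start_task', key='execution_folder_path')
    csv_path = os.path.join(execution_folder_path, 'abc.csv')
    # read and transform csv_path here

process_task = PythonOperator(
    task_id='process_task',
    provide_context=True,
    python_callable=process,
    dag=dag
)

start_task >> process_task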
Okay I've been having an issue with writing results to folders in JMeter.
I have set 2 variables, one for the name of the test and one for the submit date. I want the reports to be written to the folder named with these two variables.
Here are the variables:
TestRun = "Name of test"
DateRun = $__{time(dd-MMM-yyyy HH.mm.ss)}
The path of the folder to be written to looks like this:
C:\Tests\TestEnvironment\Results\\${TestRun}${DateRun}\file.csv
When I run it on the master machine, it's fine. It saves to the correct file and folder path, and ends up looking something like this:
C:\Tests\TestEnvironment\Results\Test Run 1 - 08-May-2014 08.55.47\file.csv
However, when I run it on remote machines, it saves it literally as below:
C:\Tests\TestEnvironment\Results\${TestRun}${DateRun}\file.csv
So I end up with a folder named "${TestRun}${DateRun}"
Am I missing something blindingly obvious, or is this an actual JMeter issue?
Thanks!
As per JMeter help:
-G, --globalproperty <argument>=<value>
    Define Global properties (sent to servers)
    e.g. -Gport=123
    or -Gglobal.properties
You need to use the -G key so that your variables are distributed to the remote clients.
so something like:
jmeter -r -n -GTestRun=SomeName -GDateRun=SomeTime -t /path/to/your/plan
should help.
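Since -G sets properties rather than test-plan variables, one way to pick the values up in the plan is to reference them through the __P function; a sketch based on the path from the question (verify that the listener resolves it the way you need on the remote engines):
C:\Tests\TestEnvironment\Results\${__P(TestRun,)}${__P(DateRun,)}\file.csv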
Alternatively you can create a .properties file and pass it to remote JMeter Engines via the same "-G" option.
I expect that if you want to use the JMeter __time() function you'll need to wrap it with __eval, otherwise it will be treated as a string. Alternatively, you can use operating system commands to retrieve the current date and time.
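For instance, if DateRun is distributed as the literal text of a __time() call, it could be evaluated where it is used with something along these lines (an untested sketch):
${__eval(${__P(DateRun,)})}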
See Apache JMeter Properties Customization Guide for more information on dealing with JMeter Properties.
I am setting a JMeter global property on the command line with the -G option. I try to use this property to alter the file name of a Simple Data Writer. However, in the Data Writer the __P function returns only the default.
jmeter -t ... --nongui ... -GFileName=MyFile.xml ...
So I know that I am setting the global property correctly. Both the JMeter log and the JMeter server log show that the value is being captured from the command line. However, it still refuses to write a file name with anything other than the default.
I use the following as the filename:
filename_${__P(FileName,Default.fl)}
How do I pass in a value at the command line so that I can use it as the file name for a Simple Data Writer?
Notes: I am using remote servers, so I must use -G, and I already have a primary data file output, so I cannot use -l.
Why not use the -J or -D directives to set your property?
Everything will work as you want in the case of:
-JFileName=MyFile.xml
or
-DFileName=MyFile.xml
In both cases you can then refer to this property in the Simple Data Writer as ${__P(FileName,)}.
Well, I got the same negative result as you while trying to use a global (-G) property, but I cannot find anything in the situation you describe that requires global (-G) properties instead of local (-J) or system (-D) ones.
Global properties are meant to be sent to remote servers... are you executing the test in client-server mode (with jmeter-server started)?
Then, as per 18.3.9 Simple Data Writer:
When running in non-GUI mode, the -l flag can be used to create a data file.
I.e. running
jmeter -n -t ... -l MyFile.xml
will give you the same result in MyFile.xml.
As an additional note.
You can try the JMeter Plugins solutions:
Flexible File Writer, instead of the native Simple Data Writer.