How do I migrate a NiFi 1.10.0 flow.xml.gz to 1.14 or newer versions: sensitive properties

I have a dataflow running in NiFi 1.10.0; the relevant properties from this installation are:
nifi.sensitive.props.key=
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=
I am trying to migrate the flow file (flow.xml.gz) to the 1.15.2 install, where the properties are:
nifi.sensitive.props.key=<redacted>
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=NIFI_PBKDF2_AES_GCM_256
nifi.sensitive.props.additional.keys=
I found this section in the NiFi admin guide to help with the migration.
Has anyone done this, what command options did you use?
Also, is this a two-step process, since I am going from a blank key to a non-empty one and changing the algorithm at the same time?
I used this command, and the conversion works fine when you don't change the algorithm; basically it just sets a key that was not set in the earlier 1.10.0 install:
$ ./nifi-toolkit-1.15.2/bin/encrypt-config.sh -f /path/to/nifi/nifi-1.10.0/conf/flow.xml.gz -g /path/to/nifi/nifi-1.15.2/conf/flow.xml.gz -s new_password -n /path/to/nifi/nifi-1.10.0/conf/nifi.properties -o /path/to/nifi/nifi-1.15.2/conf/nifi.properties -x
How do you change the algorithm and set the key at the same time?
Thanks

The issue can be resolved with the following steps.
Before migration, if you don't have nifi.sensitive.props.key set, set it using the following command:
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz -p ${NIFI_HOME}/conf/nifi.properties -s <NEW_KEY_TO_SET> -x
Once the key is set, upgrade NiFi. Since the algorithm has changed in the newer version, set it using:
${NIFI_HOME}/bin/nifi.sh set-sensitive-properties-algorithm <NEW_ALGORITHM>
Once the algorithm is set, encrypt again using:
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh -f /opt/nifi/nifi-current/data/flow.xml.gz -p ${NIFI_HOME}/conf/nifi.properties -s <NEW_KEY_TO_SET> -x
You will now have files that are compatible with your latest version.
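Putting those steps together, here is a minimal end-to-end sketch. The paths, the NIFI_TOOLKIT_PAT/NIFI_HOME variables, and the key value are illustrative (NiFi 1.14+ requires the key to be at least 12 characters):
# 1. On the 1.10.0 install, set a sensitive properties key where none was set
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh \
  -f /opt/nifi/nifi-current/data/flow.xml.gz \
  -p ${NIFI_HOME}/conf/nifi.properties \
  -s 'someLongPassphrase' -x
# 2. Upgrade NiFi, then switch to the new algorithm (command available in 1.14+)
${NIFI_HOME}/bin/nifi.sh set-sensitive-properties-algorithm NIFI_PBKDF2_AES_GCM_256
# 3. Re-encrypt the flow with the same key under the new algorithm
${NIFI_TOOLKIT_PAT}/bin/encrypt-config.sh \
  -f /opt/nifi/nifi-current/data/flow.xml.gz \
  -p ${NIFI_HOME}/conf/nifi.properties \
  -s 'someLongPassphrase' -x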

Related

Does setting sonar.timemachine.period1 from config or command line work?

I am attempting to set the leak period in SonarQube 5.6.5 using Sonar Scanner with a properties file or command line argument, but I am having no success.
I would like to set the sonar.timemachine.period1 property to a specific version, e.g., 1.0.0, as is mentioned in solution 2 (using sonar-project.properties) or solution 3 (using a command line parameter, e.g., -Dsonar.timemachine.period1=1.0.0) of the accepted answer for "Sonar runner, see only newly introduced issues".
Here is the scenario that reproduces my issue.
Run analysis with sonar.projectVersion=1.0.0.
Run analysis with sonar.projectVersion=2.0.0 and sonar.timemachine.period1=1.0.0.
Run analysis with sonar.projectVersion=3.0.0 and sonar.timemachine.period1=1.0.0.
After the second and third analysis I would expect the leak period to be "since 1.0.0" (which is the behavior if I manually set the leak period in the SonarQube admin section to 1.0.0). Instead, for the third analysis the leak period is being set to 2.0.0.
What am I missing? Is this a bug?
Setting sonar.timemachine.period1 as a 'normal' property is not sufficient; you will need to set properties via the REST API. Note that you'll need admin rights.
I had the same problem when using the SonarQube Ant Task and created a macrodef for that purpose.
See:
SonarQube Web API documentation
Sonarqube set leak period to specific version other than previous version
Rebse is correct: you must use the API to set the sonar.timemachine.period1 property. I'm not using Ant; I used a curl command from Bamboo.
curl -X POST \
  -u MY_USERNAME:MY_PASSWORD \
  -d resource=MY_PROJECT_KEY \
  -d id=sonar.timemachine.period1 \
  -d value=1.0.0 \
  http://localhost:9000/api/properties
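For reference, on newer SonarQube versions (6.3+) the api/properties endpoint is deprecated in favor of api/settings/set, and the leak period property was renamed to sonar.leak.period; the equivalent call there would look something like this (same placeholder credentials and project key):
curl -X POST \
  -u MY_USERNAME:MY_PASSWORD \
  -d key=sonar.leak.period \
  -d value=1.0.0 \
  -d component=MY_PROJECT_KEY \
  http://localhost:9000/api/settings/set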

SASL mechanism using librdkafka in RHEM

How do I configure the SASL mechanism using Kerberos with the librdkafka library on RHEM OS?
I have already:
set WITH_SASL=y
installed the libsasl2 package
Follow these steps:
I'm guessing you are on RHEL, so make sure to install the following packages first: cyrus-sasl, cyrus-sasl-devel, cyrus-sasl-gssapi.
Then run ./configure from the librdkafka directory, check the output and make sure it lists WITH_SASL y.
Run make and sudo make install
Find out what port the broker's SASL_PLAINTEXT listener is listening on, either by asking your Kafka ops team or by looking at the listeners=.. configuration property in the broker's server.properties file (see the illustrative snippet after these steps).
Follow the steps outlined in this Wiki post to set up keytabs, etc: https://github.com/edenhill/librdkafka/wiki/Using-SASL-with-librdkafka
Verify that it works with one of the example programs, e.g.: examples/rdkafka_example -b <broker>:<sasl_port> -L -d security -X security.protocol=SASL_PLAINTEXT -X sasl.kerberos.service.name=<service> -X sasl.kerberos.keytab=/path/to/clients.keytab -X sasl.kerberos.principal=<clientname>/<clienthost>
When you have it working with the example program, move the configuration properties into your program (rd_kafka_conf_set() et al.).
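Regarding the listener lookup above, a SASL entry in the broker's server.properties typically looks like this (host, port, and service name are illustrative):
listeners=PLAINTEXT://broker1:9092,SASL_PLAINTEXT://broker1:9093
sasl.kerberos.service.name=kafka
The client would then connect to port 9093 with security.protocol=SASL_PLAINTEXT.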
Also see the more detailed SASL documentation here:
http://docs.confluent.io/3.1.1/kafka/sasl.html

After setting authenticator: PasswordAuthenticator in Cassandra.yaml, Cassandra CQL Shell does not run

I'm new to Cassandra. I'm using DataStax Community Edition with a single node on Windows 7. I tried to change authentication by setting the authenticator value from AllowAllAuthenticator to PasswordAuthenticator in cassandra.yaml. After that change, it no longer lets me run the Cassandra CQL Shell.
Q1. Why is this happening?
Q2. How do I overcome it?
How are you accessing cqlsh? If you have the password authenticator activated, then you will need to specify the default Cassandra superuser with the username and password flags.
Linux:
./cqlsh -u cassandra -p cassandra
In Windows, I'm going to guess that it's something like this:
cqlsh -u cassandra -p cassandra
Note that once you get in, you'll want to create your own superuser and disable the default cassandra account, as described here and sketched below.
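A minimal sketch of that follow-up in cqlsh (the dba user name and both passwords are illustrative):
cqlsh> CREATE USER dba WITH PASSWORD 'SuperSecret1' SUPERUSER;
-- log out, log back in as the new superuser, then lock down the default account:
cqlsh> ALTER USER cassandra WITH PASSWORD 'SomethingLongAndRandom' NOSUPERUSER;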
"I'm accessing cqlsh from START-> Datastax Community Edition-> Cassandra CQL Shell"
I wasn't aware that the Windows version now had a shortcut to cqlsh. Try modifying that shortcut's target (as shown here), and add -u cassandra -p cassandra to the end. I was able to get this to work by installing and modifying my shortcut's "target" property to this:
"E:\Program Files\DataStax Community\python\python.exe" "e:\Program Files\DataStax Community\apache-cassandra\bin\cqlsh" -u cassandra -p cassandra
Basically, put the -u and -p flags outside of the double quotes, and it should work.

Running Cassandra on Mac OS X

I am trying to run Cassandra on my mac.
I installed it following the steps detailed here: http://www.datastax.com/docs/1.0/getting_started/install_singlenode_root
but when I run:
bin/nodetool ring -h localhost
I get the following error message:
Class JavaLaunchHelper is implemented in both
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/java and
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
How can I make Cassandra work?
Many thanks
You are using ancient docs. On a recent version of Cassandra, run the command like this:
bin/nodetool -h localhost ring (see http://www.datastax.com/documentation/cassandra/2.1/cassandra/tools/toolsRing.html)
If you are using vnodes (the default), use nodetool status for an easier-to-read output.
Use these docs, or the docs that match your installation; I doubt you installed Cassandra 1.0. Check the installation instructions for the version you downloaded.
CORRECTION: the nodetool ring command worked for me using options in any position on 2.0.10:
bin/nodetool -h localhost ring
bin/nodetool ring -h localhost
and using --h instead of -h
It is a known bug in the JDK, but it is not going to stop you from running Cassandra.
What you can do is set the JAVA_HOME variable explicitly.
It will not solve the bug, but it might suppress the warning.
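For example, on macOS you could point it at the JDK 8 install from the warning above:
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)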
This is a problem with the JDK version, so do the following:
Unset JAVA_HOME in your terminal.
Edit nodetool and point its JAVA variable at a JDK older than JDK 7, e.g.:
JAVA=/Library/Java/JavaVirtualMachines/jdk1.6.0_xx.jdk/Contents/Home/bin/java
Then run nodetool; you should be good to go without any issue.

Script Karaf shell commands?

I need to issue Karaf shell commands non-interactively, preferably from a script. More specifically, I need to tell Karaf to feature:install a set of features in an automated way.
# Attempt to install a feature in a way I could script
bash> bin/karaf feature:install myFeature
# Drops me into Karaf shell
karaf> feature:uninstall myFeature
Error executing command: Feature named 'myFeature' is not installed
# Feature wasn't installed
Is this possible? Is there a different way of solving this issue (automated install of a set of Karaf features) that I'm missing?
With bin/karaf you start Karaf with a login prompt. If you want to start Karaf so you can issue commands against it, first start Karaf in server mode using the bin/start shell script. Then you can use either bin/client or bin/shell to communicate with Karaf in headless mode.
For example:
./bin/client list
START LEVEL 100 , List Threshold: 50
ID | State | Lvl | Version | Name
----------------------------------------------------------------------------------
72 | Active | 80 | 0 | mvn_org.ops4j.pax.web.samples_war_4.1.0-SNAPSHOT_war
This should work for all versions of Karaf already (maybe not the 2.2.x line ;-) )
If the version you're using is a 3.0.x or higher you might need to add a user to the command.
./bin/client -u karaf list
To issue Karaf shell commands non-interactively, preferably from a script, you can also use the Karaf client (scroll down to "Apache Karaf client"). To install features, I use a command like:
/opt/karaf/bin/client -r 7 "feature:install http; feature:install webconsole"
The -r switch retries the connection if the server is not up yet (I use it in a Docker script).
It's possible to issue non-interactive Karaf shell commands using sshpass if keeping the password secret isn't important.
sshpass -p karaf ssh -tt -p 8101 -o StrictHostKeyChecking=no karaf@localhost feature:install odl-l2switch-switch-ui
Working example in OpenDaylight's Vagrant-based L2Switch Tutorial.
Late to the party, but this problem can easily be solved using the Features Boot configuration, located in the etc/org.apache.karaf.features.cfg file.
According to the provisioning section of the Karaf manual (https://karaf.apache.org/manual/latest/provisioning):
A boot feature is automatically installed by Apache Karaf, even if it has not been previously installed using feature:install or FeatureMBean.
This file has two main properties: featuresRepositories and featuresBoot.
featuresRepositories contains a list (comma-separated) of features repositories (features XML) URLs.
featuresBoot contains a list (comma-separated) of features to install at boot.
Note that once you update this file, Karaf will attempt to install the features listed in the featuresBoot configuration every time it starts. So if all you are looking to automate is installing features (as per the original question), then this is a great option; an illustrative snippet follows.
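An illustrative etc/org.apache.karaf.features.cfg fragment (the repository URL and myFeature are placeholders for your own features XML and feature name):
featuresRepositories = mvn:org.example/my-features/1.0.0/xml/features
featuresBoot = standard, ssh, myFeature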
Another option is to use Expect.
This Expect script from OpenDaylight's CI installs and verifies a Karaf feature. Here's an excerpt:
# Install feature
expect "$prompt"
send "feature:install odl-netvirt-openstack\r"
expect {
    "Error executing command: Can't install feature" {
        send_user "\nFailed to install test feature\n"
        exit 1
    }
}
So the general practice is to install the feature, then loop on bundle:list | grep bundleName to see if the bundles you need are installed (a sketch follows below). Then you continue with your test case.
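A sketch of that polling loop using bin/client (the bundle name and sleep interval are placeholders):
# Keep polling until the expected bundle appears in bundle:list
until ./bin/client -u karaf "bundle:list" | grep -q my-bundle-name; do
    echo "Waiting for my-bundle-name..."
    sleep 5
done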
