migration.conf file while installing splunk universal forwarder - installation

Please let me know: what exactly is the migration.conf file created during installation of the Splunk UF?
File path : /opt/splunkforwarder/etc/system/local/migration.conf
Thanks in Advance,
NVP

Have you looked at the contents of the file? It shows what Splunk did when a new version of the forwarder was installed. When the new version runs for the first time, a number of checks are run and files may be modified or removed. The migration.conf file lists each action that was performed.
It's a good idea to review this log after each upgrade, because it may identify local changes that override new features.
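The file is plain text, so reviewing it after an upgrade is a one-liner; for example, on Linux (using the path quoted in the question; adjust if your installation directory differs):
cat /opt/splunkforwarder/etc/system/local/migration.conf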

Related

Configuring settings for last participant support wsadmin/websphere

I recently ran into an issue configuring Last Participant Support on a deployed application. I found an old post about it:
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014090728
On the server itself I found how to do it, but with Jython or wsadmin commands I am not able to find how to do it on the application itself.
The post above does not help me. Any ideas?
There is no command assistance available for the action of changing last participant support from the admin console, which typically implies there is no scripting command associated with the action. There also doesn't appear to be a wsadmin AdminApp command to modify the property. Looking at the config repo changes made as a result of the admin console action, the IBM Programming Model Extensions (PME) deployment descriptor file "ibm-application-ext-pme.xmi" for an application is created/modified by the action.
If possible, the best long-term solution would be to use a tool like RAD to generate that extensions file when packaging the application, because then your config changes wouldn't get overridden if you need to redeploy the app. If you can't modify the app, you can script the addition of a PME descriptor file in each of the desired apps, with the understanding that redeploying the app will overwrite your changes (a batch version of this is sketched after the steps below). The changes can be made by doing something along the lines of:
1) create a text file named ibm-application-ext-pme.xmi with contents similar to this:
<pmeext:PMEApplicationExtension xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:pmeext="http://www.ibm.com/websphere/appserver/schemas/5.0/pmeext.xmi" xmi:id="PMEApplicationExtension_1559836881290">
<lastParticipantSupportExtension xmi:id="LastParticipantSupportExtension_1559836881292" acceptHeuristicHazard="false"/>
</pmeext:PMEApplicationExtension>
2) in wsadmin or your Jython script, do the following (note: in this example the .xmi file you created is in the current directory; if not, include the full path to it in the createDocument command):
deployUri = "cells/<your_cell_name>/applications/<your_app_name>.ear/deployments/<your_app_name>/META-INF/ibm-application-ext-pme.xmi"
AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")
AdminConfig.save()
3) restart the server
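If you need to apply this to several applications, the steps above can be wrapped in a small loop. A minimal wsadmin Jython sketch, assuming each deployment folder name matches the application name and that the .xmi file is in the current directory (the cell and app names below are placeholders):

cellName = "<your_cell_name>"   # placeholder: your cell name
appNames = ["App1", "App2"]     # placeholder: the apps to update

for appName in appNames:
    # Build the repository URI for this app's PME descriptor
    deployUri = "cells/%s/applications/%s.ear/deployments/%s/META-INF/ibm-application-ext-pme.xmi" % (cellName, appName, appName)
    # createDocument copies the local file into the configuration repository
    AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")

AdminConfig.save()   # persist all of the changes in one save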

How can I batch Kafka reads to Elasticsearch

I'm not too familiar with Kafka, but I would like to know the best way to read data in batches from Kafka so I can use the Elasticsearch Bulk API to load the data faster and more reliably.
By the way, I am using Vert.x for my Kafka consumer.
Thank you,
I cannot tell if this is the best approach or not, but when I started looking for similar functionality I could not find any readily available frameworks. I found this project:
https://github.com/reachkrishnaraj/kafka-elasticsearch-standalone-consumer/tree/branch2.0
and started contributing to it, as it did not do everything I wanted and was not easily scalable. Now the 2.0 version is quite reliable and we use it in production in our company, processing/indexing 300M+ events per day.
This is not self-promotion :) - just sharing how we do the same type of work. There might be other options right now as well, of course.
https://github.com/confluentinc/kafka-connect-elasticsearch
Or you can try this project:
https://github.com/reachkrishnaraj/kafka-elasticsearch-standalone-consumer
Running it as a standard JAR:
1. Download the code into a $INDEXER_HOME directory.
2. cp $INDEXER_HOME/src/main/resources/kafka-es-indexer.properties.template /your/absolute/path/kafka-es-indexer.properties and update all relevant properties as explained in the comments.
3. cp $INDEXER_HOME/src/main/resources/logback.xml.template /your/absolute/path/logback.xml, specify the directory you want to store logs in, and adjust the maximum sizes and number of log files as needed.
4. Build the app JAR (make sure you have Maven installed):
cd $INDEXER_HOME
mvn clean package
The kafka-es-indexer-2.0.jar will be created in $INDEXER_HOME/bin. All dependencies will be placed into $INDEXER_HOME/bin/lib and are linked via the kafka-es-indexer-2.0.jar manifest.
5. Edit your $INDEXER_HOME/run_indexer.sh script: make it executable if needed (chmod a+x $INDEXER_HOME/run_indexer.sh) and update the properties marked with "CHANGE FOR YOUR ENV" comments according to your environment.
6. Run the app [use JDK 1.8]:
./run_indexer.sh
I used Spark Streaming and it was quite a simple implementation, using Scala.
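Whichever framework you choose, the core pattern is the same: poll a batch of records from Kafka, convert them to bulk actions, index them, then commit offsets. A minimal sketch of that pattern in Python, assuming the kafka-python and elasticsearch client libraries (the topic, index, and addresses are placeholders; the same flow applies in Vert.x/Java):

from kafka import KafkaConsumer
from elasticsearch import Elasticsearch, helpers

consumer = KafkaConsumer(
    "events",                              # placeholder topic name
    bootstrap_servers="localhost:9092",
    enable_auto_commit=False,              # commit only after a successful bulk
    value_deserializer=lambda v: v.decode("utf-8"),
)
es = Elasticsearch("http://localhost:9200")

while True:
    # poll() returns a dict of {TopicPartition: [records]}, up to max_records total
    batches = consumer.poll(timeout_ms=1000, max_records=500)
    actions = [
        {"_index": "events", "_source": {"message": record.value}}
        for records in batches.values()
        for record in records
    ]
    if actions:
        helpers.bulk(es, actions)          # one bulk request per polled batch
        consumer.commit()                  # advance offsets only after indexing succeeds

Committing offsets only after the bulk request succeeds is what makes the load reliable: if indexing fails, the batch is re-read on the next poll instead of being lost.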

When does "--auto-reload" work in the Odoo 8 server?

The command that starts the Odoo 8 server provides an "--auto-reload" option,
but I don't know how it works or when it takes effect.
Please give me some guidance on that.
Normally, if you change your Python code, you need to restart the server to apply the new changes.
When the --auto-reload parameter is enabled, you don't need to restart the server: it enables auto-reloading of Python and XML files. It requires pyinotify, a Python module for monitoring filesystem changes.
Just add --auto-reload to your configuration. By default the value is "false", and you don't need to pass any extra arguments; --auto-reload is enough. If everything is set up and working properly, you will see
openerp.service.server: Watching addons folder /opt/odoo/v8.0/addons
openerp.service.server: AutoReload watcher running
in the server log. Don't forget to install the pyinotify package.
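For example (the pip command is standard; the paths are placeholders, and the config-file key name is an assumption based on Odoo's usual flag-to-key mapping, so verify it against your version):
pip install pyinotify
./openerp-server --config=/etc/odoo/openerp-server.conf --auto-reload
or, in the configuration file:
auto_reload = True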
I found this while looking for the same thing, but for Odoo 10. Someone else will follow the same route, so:
this has been changed in Odoo 10 to --dev=reload. But you can't specify that in /etc/init.d/odoo itself, only from the command line.

logstash forwarder doesn't release file handle

I am running logstash-forwarder to ship logs.
The forwarder, Logstash, and Elasticsearch are all on localhost.
I have one UI application whose log files are read by the shipper. While the forwarder is running, archiving of the log file doesn't work; logs keep being appended to the same file. I have configured the log file to archive every minute, so I can see the change. As soon as I stop the forwarder, log file archiving starts working.
I guess the forwarder keeps holding the file handle, which is why the file does not get archived.
Please help me on this.
regards,
Sunil
Running on Windows? There are known unfixed bugs.
See https://github.com/elasticsearch/logstash/issues/1944
for a possible workaround.

Unable to access file adapter using JMS

I am not able to retrieve records from a flat file using File Adapter version 5.6 with JMS. It always shows this error on the console:
Startup error. SDK Error: Could not open JMS shared library jms, DllError.
The error occurred on starting the adapter after initialization. The Repository URL is D:\bala\input\Work\AT_adfiles_53689.dat and the Configuration URL is Fileadapter/FileAdapterConfiguration.
It's working fine with RV but not with JMS. Kindly help me out.
I found the solution to the problem above. First, look into the AT_adfiles_xxxxx.tra file under your working adapter directory and find the line that says "tibco.env.PATH=xxxxx".
Look into all of the bin directories on that path; you will find that some of them contain "libeay32.dll" and "ssleay32.dll". The problem is that sdk\5.5\bin contains a different version of "libeay32.dll" and "ssleay32.dll" than the other folders. For the adapter to run correctly, every copy of "libeay32.dll" and "ssleay32.dll" must be the same version.
So whichever version you decide to use, copy it to the other folders that contain the same files. To preserve the original versions, I renamed the originals with .bak at the end.
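For example, on Windows the backup-and-copy step might look like this (the directories are hypothetical; use the folders actually listed in your tibco.env.PATH):
cd C:\tibco\adapters\sdk\5.5\bin
ren libeay32.dll libeay32.dll.bak
ren ssleay32.dll ssleay32.dll.bak
copy C:\tibco\tpcl\5.5\bin\libeay32.dll .
copy C:\tibco\tpcl\5.5\bin\ssleay32.dll .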
This should allow you to test the file adapter!
