I want to raise the maximum number of threads in the default work manager's thread pool using a wsadmin (Jython) script. What is the best approach?
I can't seem to find documentation of a fine-grained control that would let me modify just this property. The closest I can find to what I want is AdminTask.applyConfigProperties, which requires passing a file. The documentation explains that if you want to modify an existing property, you must extract the existing properties file, edit it in an editor, and then pass the edited file to applyConfigProperties.
I want to avoid the manual step of extracting the existing properties file and editing it. The script needs to run completely unattended. In fact, I'd prefer not to use a file at all, but just set the property to a value directly in the script.
Something like the following pseudo-code:
defaultwmId = AdminConfig.getid("wm/default")
AdminTask.setProperty(defaultwmId, ['-propertyName', 'maxThreads', '-propertyValue', '20'])
The following represents a fairly simplistic wsadmin approach to updating the max threads on the default work managers:
# Find every default work manager definition in the configuration
# (one entry per scope), then raise its maximum pool size
workManagers = AdminConfig.getid("/WorkManagerInfo:DefaultWorkManager/").splitlines()
for workManager in workManagers:
    AdminConfig.modify(workManager, '[[maxThreads "20"]]')
# Persist the changes to the configuration repository
AdminConfig.save()
Note that the first line will retrieve all of the default work managers across all scopes, so if you only want to modify a particular application server's or cluster's work manager properties, you will need to refine the containment path further, as in the sketch below. Also, you may need to synchronize the nodes and restart the modified servers in order for the property to be applied at runtime.
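For example, a minimal sketch of scoping the change to a single server and then pushing it out to the node (the cell, node, and server names are hypothetical placeholders for your own topology):

# Narrow the containment path to one server's default work manager
wmId = AdminConfig.getid("/Cell:myCell/Node:myNode/Server:server1/WorkManagerInfo:DefaultWorkManager/")
AdminConfig.modify(wmId, '[[maxThreads "20"]]')
AdminConfig.save()

# In a network deployment cell, ask the node agent to synchronize
# the saved configuration (requires the node agent to be running)
ns = AdminControl.completeObjectName("type=NodeSync,node=myNode,*")
if ns:
    AdminControl.invoke(ns, "sync")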
More information on the use of the AdminConfig scripting object can be found in the WAS InfoCenter:
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/rxml_adminconfig1.html
In OpenSIPS there is an option to cache all of the db_text tables at startup, or to scan the text files every time they are queried, by using the following line in the opensips.cfg file:
modparam("db_text", "db_mode", 0)
My question is if it is possible to change this behavior at runtime, or do I need to change the config file and restart the server every time?
The db_mode module parameter of db_text cannot be changed at runtime.
Depending on your needs, however, db_mode = 0 combined with occasional dbt_reload MI commands might be superior to using on-demand caching (db_mode = 1).
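For example, a sketch of that combination (dbt_reload is the MI command exported by the db_text module; the opensipsctl fifo wrapper shown here is one common way to issue MI commands and may differ per installation):

# opensips.cfg: cache the text tables in memory at startup
modparam("db_text", "db_mode", 0)

# shell: after editing the text files on disk, refresh the in-memory
# cache without restarting the server
opensipsctl fifo dbt_reload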
I want to know how to (or whether I can) parameterize the parameter file name in Informatica.
A little bit of background: I am building a standard mapping in Informatica, which business users can call directly after selecting the standard filters they want to apply in the mapping using a GUI.
The parameter file name will be given by the business user, and all of the filters that he/she selected will be in the file, which will be dropped in the parameter file folder on the Informatica server.
This works well when only one user is using it at a point in time.
I also want to find out what I should do when multiple users are working on the GUI, generating parameter files, and invoking the Informatica mapping. How do I get multiple instances of the same mapping running at the same time?
I hope I am making sense here.
Thanks!
You can achieve this by using concurrent execution of the workflow. Read about it and understand how you can implement it.
Once you know how to implement it, use a backend script or code invoked by the GUI to assign an instance name to each call. For each instance name, you can have an individual parameter file. (I believe there would be a finite set of combinations of variable values in your case.) You can use the command below to call individual instances, either through your GUI or from any other backend code:
pmcmd startworkflow -f %informatica_folder_name%
    -paramfile %paramfilepathandname% -rin %instance_name% %workflow_name%
(the usual connection options such as -sv, -d, -u and -p are omitted here)
It might sound a bit confusing, but once you understand how concurrent workflows work, you can build on it based on the above input.
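As a concrete illustration, here is a minimal sketch of such a backend wrapper in Python (the folder name, workflow name, parameter directory, and parameter names are all hypothetical, and the pmcmd connection flags are again omitted):

import subprocess
import uuid

PARAM_DIR = "/infa/param_files"    # assumed parameter file folder on the server
FOLDER = "SHARED_FOLDER"           # assumed repository folder
WORKFLOW = "wf_standard_map"       # assumed concurrent-enabled workflow

def run_for_user(user, filters):
    # Unique run-instance name per invocation, so several users can run
    # the same workflow at the same time
    instance = "%s_%s" % (user, uuid.uuid4().hex[:8])
    param_file = "%s/%s.parm" % (PARAM_DIR, instance)
    with open(param_file, "w") as f:
        f.write("[%s.WF:%s]\n" % (FOLDER, WORKFLOW))
        for name in filters:
            f.write("$$%s=%s\n" % (name, filters[name]))
    subprocess.check_call(["pmcmd", "startworkflow",
                           "-f", FOLDER,
                           "-paramfile", param_file,
                           "-rin", instance,
                           WORKFLOW])

run_for_user("alice", {"REGION": "EMEA", "YEAR": "2016"})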
It'll only be possible if you call Informatica from an external tool, not the client tools. One way is described by @Utsav; the other is to use the Informatica Web Services Hub (WSH) to call a workflow - you can indicate the parameter file you want to be used with the workflow, as well as the desired instance name.
I think this guide to concurrent workflows may be what you are looking for:
https://kb.informatica.com/howto/6/Pages/17/301264.aspx
I am looking for a solution for dynamically changing the channel variable destination_number without needing to reloadxml (as that might affect ongoing or incoming calls). So basically, FS has to wait until I provide it with the appropriate destination_number. Until now, I have been doing it the XML way (editing XML files and then issuing the reloadxml command at the FS prompt), but that is not viable for my requirement.
You can use a Lua script (or any other FreeSWITCH-supported scripting language) for this. Using Lua you can write a custom script with very sophisticated logic.
More details:
https://freeswitch.org/confluence/display/FREESWITCH/Lua+API+Reference
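For example, a minimal sketch of an inbound-dialplan Lua script (the gateway name and the lookup function are hypothetical placeholders for your own routing logic). Because the destination is computed at call time, the XML never changes and no reloadxml is needed; the script can be wired in with an <action application="lua" data="route.lua"/> dialplan entry:

-- route.lua: decide the destination when the call arrives
local function lookup_destination(caller)
    -- stand-in for your real logic (DB query, HTTP request, etc.)
    return "1001"
end

local caller = session:getVariable("caller_id_number")
local dest = lookup_destination(caller)
if dest then
    session:setVariable("destination_number", dest)
    session:execute("bridge", "sofia/gateway/my_gw/" .. dest)
else
    session:hangup("NO_ROUTE_DESTINATION")
end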
I have a simple question, as I am new to NiFi.
I have a GetTwitter processor set up and configured (correctly, I assume). I have the Twitter Endpoint set to Sample Endpoint. I run the processor and it runs, but nothing happens: I get no input/output.
How do I troubleshoot what it is doing (or in this case not doing)?
A couple things you might look at:
What activity does the processor show? You can look at the metrics to see if anything has been attempted (Tasks/Time) as well as whether it succeeded (Out).
Stop the downstream processor temporarily to make any output FlowFiles visible in the connection queue.
Are there errors? Typically these appear in the top-left corner as a yellow icon.
Are there related messages in the logs/nifi-app.log file?
It might also help us help you if you describe the GetTwitter Property settings a bit more. Can you share a screenshot (minus keys)?
In my case it's because there were two sensitive values set. According to the documentation, when a sensitive value is set, the nifi.properties file's nifi.sensitive.props.key value must be set - it is an empty string by default in the Hortonworks Data Platform distribution. I set this to some random string (literally random_STRING, but you can use anything) and re-created my process from the template, and it began working.
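For reference, the resulting entry looks like this (the value is just the example string from above; any non-empty string will do):

# conf/nifi.properties
nifi.sensitive.props.key=random_STRING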
In general, I suppose this can be debugged by setting the log level to DEBUG; a sketch of that change is shown below.
However, in my case the issue was resolved more easily:
I had just set up a new cluster, and decided to copy all Twitter keys and secrets to Notepad first.
It turns out that despite carefully copying the keys from Twitter, one of them had a leading tab. When pasting directly into the GetTwitter processor, this would not show, but fortunately it showed up in Notepad and I was able to remove it and make this work.
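For the DEBUG route mentioned above, the change is a one-line logger entry in conf/logback.xml (the logger name below is the standard NiFi Twitter processor class, assumed here):

<!-- conf/logback.xml: verbose logging for just the GetTwitter processor -->
<logger name="org.apache.nifi.processors.twitter.GetTwitter" level="DEBUG"/>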
How can I check the value of a particular parameter (say io.sort.mb) on Hadoop while I'm running a benchmark (say teragen)?
I know you can always go to the configuration files and look, but I have many configuration files, plus some parameters get overwritten (like the number of map tasks).
I don't have GUI. Is there any command to see this?
Thanks!
Whatever you set in the configuration files should be available in your job.xml, which you can find in the job tracker under the specific job's entry.
From code, these values are available through the Configuration object, as in the sketch below.
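For example, a minimal sketch of reading the effective value from code (JobConf is the Hadoop 1.x-era class matching the io.sort.mb key in the question):

import org.apache.hadoop.mapred.JobConf;

public class ShowConf {
    public static void main(String[] args) {
        // JobConf pulls in mapred-default.xml and mapred-site.xml from the
        // classpath, so this prints the value a job would see by default;
        // per-job overrides still show up only in that job's job.xml.
        JobConf conf = new JobConf();
        System.out.println("io.sort.mb = " + conf.get("io.sort.mb"));
    }
}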