Oracle Listener utility and batch file - oracle

I have 3 listeners on the same database server. The listeners are: listener_cympp1 (1522), listener_cymap1 (1523), listener_cympd1 (1524).
How can I automatically rename the listener log file with a batch file when the log grows over 100 MB?
Which syntax can I use to set the listener name in this command? Executing "lsnrctl set current_listener listener_cympp1" first doesn't help.
Kind regards,

If I understand correctly, you want to rotate the listener log files at a size of 100 MB, and in addition you want to explicitly specify the log file name of each listener.
You can do this without a batch or shell script: the listener itself can be configured to rotate its log based on file size and number of files.
You can find all the relevant information in the Oracle documentation on the listener configuration parameters (listener.ora).
The parameters of interest are:
TRACE_DIRECTORY_listener_name
TRACE_FILE_listener_name
TRACE_FILELEN_listener_name
TRACE_FILENO_listener_name
You can set the parameter values per listener.
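As a minimal sketch, a listener.ora fragment for one of the listeners could look like the lines below; the trace directory and the values for size and file count are assumptions, and TRACE_FILELEN is specified in KB. Repeat the block for each listener, then reload or restart the listener so the settings take effect.

# rotation settings for listener_cympp1 (assumed directory and values)
TRACE_DIRECTORY_LISTENER_CYMPP1 = /u01/app/oracle/network/trace
TRACE_FILE_LISTENER_CYMPP1 = listener_cympp1
TRACE_FILELEN_LISTENER_CYMPP1 = 102400   # ~100 MB per file, value in KB
TRACE_FILENO_LISTENER_CYMPP1 = 5         # keep at most 5 rotated files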

Related

Regarding WebDriver Sampler and Config/Listener Folder Path

While trying to write a WebDriver sampler with a Config/Listener element I ran into the issues below. Could anyone assist me with the same?
1:- In a Config/Listener element/WebDriver browser setup file, if we want to read a value from an external resource, or if we want to save the summary report on the PC: is there a procedure to give a unique path that works on any workstation/PC/directory after giving only the file name? Because if we execute on another station or move the file to another directory, we need to change the file location every time. Could you please guide me on the same?
2:- While writing a WebDriver sampler request I am able to execute the script, but I am getting the error below in the log viewer window. I also wanted to break the functionality into very small labelled units for one WebDriver sampler request (launch site/login successfully, validate record, logout), so after searching on Google I used the subSampleStart or sampleStart function multiple times, but I am not getting the label names in the View Results Tree listener even after setting one JMeter property. Could you please guide me on the same?
3:- Can we run three Thread Groups at the same time (all three threads run at the same time) or at some interval (the first and second run at the same time but the third starts after 10 minutes)?
Thank you in advance for giving your valuable time.
Thanks, Amit
You can give only the filename and JMeter will look for the CSV file (and write the results file) in JMeter's "bin" folder, or in the place where you launched JMeter from. You can also use the __P() function to parameterize the file name or even the path. For example, if you set your CSV Data Set Config filename like:
${__P(testdata,TestData.csv)}
you will be able to override the path using a -J command-line argument like:
jmeter -Jtestdata=/path/to/somefile.csv
If you don't provide the property, the default value of TestData.csv will be used. More information: Apache JMeter Properties Customization Guide
You're getting this "Invalid call sequence" error because you have a duplicate WDS.sampleResult.sampleStart() call; just remove one of them and that will be it.
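For the labelling part, here is a rough Groovy sketch of one WebDriver Sampler script broken into labelled units; the URL and label names are made up, and it assumes the plugin's WDS.sampleResult.subSampleStart()/subSampleEnd() API:

// WebDriver Sampler script (language: groovy)
WDS.sampleResult.sampleStart()                    // start the main sample exactly once

WDS.sampleResult.subSampleStart('launch site')    // label for the first small unit
WDS.browser.get('https://example.com')            // hypothetical application URL
WDS.sampleResult.subSampleEnd(true)               // true = mark this unit as passed

WDS.sampleResult.subSampleStart('login')
// ... perform the login steps here ...
WDS.sampleResult.subSampleEnd(true)

WDS.sampleResult.sampleEnd()                      // end the main sample exactly once

Calling sampleStart() more than once in the same script is exactly what produces the "Invalid call sequence" message.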
Add a Constant Timer as a child of the first request, or a Flow Control Action sampler at the beginning of Thread Group 3, and configure it to pause for 600000 milliseconds (10 minutes).

JMeter: CSV Data Set Config "Lines are read at the start of each test iteration." - how exactly should it work?

I'm trying to understand how the CSV Data Set Config behaves under JMeter's scoping rules and execution order.
For the CSV Data Set Config it is said that "Lines are read at the start of each test iteration." At first I thought that was talking about threads; then I read Use jmeter to test multiple Websites, where the config is put inside a Loop Controller and lines are read on each loop iteration. I've tested with 5.1.1 now and it works that way. But if I put the config at the root of the test plan, it will read a new line only on each thread iteration. Can I expect such behaviour based on the docs alone, without trial and error? I cannot see how it follows from scoping + execution order + the docs on the CSV config element. Am I missing something?
I would appreciate some ideas on why this actual behaviour is convenient and why the functionality was implemented this way.
P.S. How can I read a one-line CSV into variables at the start of the test and then stop running that config to save CPU time? In the 2.x versions there was a VariablesFromCSV config for that...
The Thread Group has an implicit Loop Controller inside it: the next line from the CSV will be read as soon as a LoopIterationListener.iterationStart() event occurs, no matter what its origin is.
It is safe to use the CSV Data Set Config as it doesn't keep the whole file in memory; it reads the next line only when the aforementioned iterationStart() event occurs. However, it does keep an open file handle. If you have plenty of RAM but not enough file handles, you can read the whole file into memory at the beginning of the test, using e.g. a setUp Thread Group and a JSR223 Sampler with the following code:
SampleResult.setIgnore()  // don't record this technical sampler in the results
// read the whole CSV into JMeter properties: line_1, line_2, ...
new File('/path/to/csv/file').readLines().eachWithIndex { line, index ->
    props.put('line_' + (index + 1), line)
}
Once done, you will be able to refer to the first line using the __P() function as ${__P(line_1,)}, to the second line as ${__P(line_2,)}, and so on.

My JDBC connection is parameterized using a property file and I am reading the value using ${__P(propertyname)}

JMeter: my JDBC connection is parameterized using a property file and I am reading the values using ${__P(propertyname)}, but it reads only the first value and not the second value from the property file.
I created a custom property file:
Server1=10:1433;Instance=SQL2014;DatabaseName=Repository15;domain=ABC
Server2=20:1433;Instance=SQL2008R2;DatabaseName=Repository14;domain=ABC
In the JMeter Test Plan I have two Thread Groups. Under each Thread Group I have added a JDBC Connection Configuration.
First Thread Group: Database URL = ${__P(Server1)}
Second Thread Group: Database URL = ${__P(Server2)}
When we run the test, it reads only the first property and ends the connection.
You need to define a different "Variable Name" for each JDBC Connection Configuration and point the relevant JDBC Requests at the matching name. As the JMeter documentation notes:
Each name must be different. If there are two configuration elements using the same name, only one will be saved. JMeter logs a message if a duplicate name is detected.
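A minimal sketch of the resulting setup; the pool names myPool1 and myPool2 are made up:

JDBC Connection Configuration (Thread Group 1): Variable Name = myPool1, Database URL = ${__P(Server1)}
JDBC Connection Configuration (Thread Group 2): Variable Name = myPool2, Database URL = ${__P(Server2)}
JDBC Request samplers in Thread Group 1: Variable Name Bound to Pool = myPool1
JDBC Request samplers in Thread Group 2: Variable Name Bound to Pool = myPool2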

Apache Flume rolling over the HDFS files on an hourly basis

I'm new to Flume and I was exploring options to roll over my HDFS files on an hourly basis using Flume.
In my project Apache Flume reads messages from RabbitMQ and writes them to HDFS.
hdfs.rollInterval - closes the file after an interval measured from when the file was opened.
A new file is created only when Flume reads a message after the previous file was closed, so this option does not solve our problem.
hdfs.path = /%y/%m/%d/%H - this option works fine and creates a folder on an hourly basis. But the problem is that the new folder is created only when a new message comes.
For example: messages are coming until 11:59 and the file is in the open state. Then the messages stop until 12:30, but the file is still open. After 12:30 a new message comes; because of the hdfs.path configuration the previous file is closed and a new file is created in the new folder.
The previous file cannot be used for computation until it is closed.
We need a way to close the opened files exactly on an hourly basis. I'm wondering if there are any options in Flume for doing that.
hdfs.rollInterval is described as:
Number of seconds to wait before rolling current file
So this line should cause each file to roll after an hour:
hdfs.rollInterval = 3600
And I would additionally ignore file size and event count, so add these as well:
hdfs.rollSize = 0
hdfs.rollCount = 0
To handle the gap you describe (a file staying open with no traffic), look at hdfs.idleTimeout, described as:
Timeout after which inactive files get closed (0 = disable automatic closing of idle files)
For example, if you set this property to 180, a file that receives no events for 180 seconds will be closed even though the roll interval has not yet elapsed.
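Putting it together, a flume.conf sketch; the agent and sink names a1 and k1 are assumptions, and the sources/channels wiring is omitted:

# HDFS sink that rolls purely by time and closes idle files after 3 minutes
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /%y/%m/%d/%H
a1.sinks.k1.hdfs.rollInterval = 3600
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.idleTimeout = 180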

OAS 10g - 10.1.2.3.0 Forms and Reports server: opmn.log is not getting updated

Hi, can you tell me how to drive the analysis of my issue: opmn.log is not getting updated. All the instances are working fine and the individual instance logs are getting updated, but $ORACLE_HOME/opmn/logs/opmn.log has been 0 KB for a week now. I could not find any statements regarding opmn.log in opmn.xml.
There is a difference in the logging mechanism between (9.0.2 to 10.1.2) and (10.1.3 and later):
(9.0.2 to 10.1.2): log-file path="x" level="x" rotation-size="x"
(10.1.3 and later): log path="x" comp="x;y;z" rotation-size="x"
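For illustration only, the corresponding opmn.xml fragments could look roughly like this; the paths, level, component list, and rotation size shown here are assumptions:

<!-- 9.0.2 to 10.1.2 (assumed values) -->
<log-file path="$ORACLE_HOME/opmn/logs/ipm.log" level="4" rotation-size="1500000"/>
<!-- 10.1.3 and later (assumed values) -->
<log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm" rotation-size="1500000"/>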
(9.0.2 to 10.1.2):
ORACLE_HOME/opmn/logs/ipm.log:
Review the error codes and messages that are shown in the ipm.log file. The PM portion of OPMN generates and outputs the error messages in this file. The ipm.log file tracks command execution and operation progress. The level of detail that gets logged in the ipm.log can be modified by configuration in the opmn.xml file. Refer to Chapter 3, "opmn.xml Common Configuration" for examples of debug levels.
ORACLE_HOME/opmn/logs/ons.log:
Use the ons.log file to debug the ONS portion of OPMN or for early OPMN errors. The ONS portion of OPMN is initialized before PM, and so errors that occur early in OPMN initialization will show up in the ons.log file.
ORACLE_HOME/opmn/logs/opmn.log:
The opmn.log file contains output generated by OPMN when the ipm.log and ons.log files are not available. Typically, the only output written to the opmn.log file will be the exit status of a child OPMN process. A status code of 4 indicates a normal reload of OPMN. All other status codes indicate an abnormal termination of the child OPMN process.
ipm.log (10.1.2) is similar to opmn.log (10.1.3)
ons.log (10.1.2) is similar to opmn.dbg (10.1.3)
