FTP Adapter Oracle SOA

I want to read files with a gap of 3 minutes between each, so my BPEL FTP adapter should read one file every 3 minutes. For example, if I have 5 files in a directory, the FTP adapter reads the 1st file, then after 3 minutes it reads the 2nd, and so on.

In the BPEL FTP/File Adapter Configuration Wizard, in the Get File (read) operation, on the file-name step you can set the "Polling Frequency" to 3 minutes so that the adapter polls for files every 3 minutes. After that, check the generated .jca file; it contains the property below, and you can also edit the polling frequency there:
<property name="PollingFrequency" value="180"/>

You can use the following properties in your file adapter configuration. PollingFrequency is the poll interval in seconds (180 = 3 minutes), and MaxRaiseSize="1" limits the adapter to raising one file per polling cycle, which is what produces the 3-minute gap between files:
<property name="PollingFrequency" value="180"/>
<property name="MaxRaiseSize" value="1"/>

Related

Mainframe pkunzip generates PEX013W Record(s) being truncated to lrecl=

I'm sending binary .gz files from Linux to z/OS via ftps. The file transfers seem to be fine, but when the mainframe folks pkunzip the file, they get a warning:
PEX013W Record(s) being truncated to lrecl= 996. Record# 1 is 1000 bytes.
Currently I’m sending the site commands:
SITE TRAIL
200 SITE command was accepted
SITE CYLINDERS PRIMARY=50 SECONDARY=50
200 SITE command was accepted
SITE RECFM=VB LRECL=1000 BLKSIZE=32000
200 SITE command was accepted
SITE CONDDISP=delete
200 SITE command was accepted
TYPE I
200 Representation type is Image
...
250 Transfer completed successfully.
QUIT
221 Quit command received. Goodbye.
They could read the file after the pkunzip, but having a warning is not a good thing.
Output from pkunzip:
SDSF OUTPUT DISPLAY RMD0063A JOB22093 DSID 103 LINE 25 COLUMNS 02- 81
COMMAND INPUT ===> SCROLL ===> CSR
PCM123I Authorized services are unavailable.
PAM030I INPUT Archive opened: TEST.FTP.SOA5021.GZ
PAM560I ARCHIVE FASTSEEK processing is disabled.
PDA000I DDNAME=SYS00001,DISP_STATUS=MOD,DISP_NORMAL=CATALOG,DISP_ABNORMAL=
PDA000I SPACE_TYPE=TRK,SPACE_TYPE=CYL,SPACE_TYPE=BLK
PDA000I SPACE_PRIMARY=4194304,SPACE_DIRBLKS=5767182,INFO_ALCFMT=00
PDA000I VOLUMES=DPPT71,INFO_CNTL=,INFO_STORCLASS=,INFO_MGMTCLASS=
PDA000I INFO_DATACLASS=,INFO_VSAMRECORG=00,INFO_VSAMKEYOFF=0
PDA000I INFO_COPYDD=,INFO_COPYMDL=,INFO_AVGRECU=00,INFO_DSTYPE=00
PEX013W Record(s) being truncated to lrecl= 996. Record# 1 is 1000 bytes.
PEX002I TEST.FTP.SOA5021
PEX003I Extracted to TEST.FTP.SOA5021I.TXT
PAM140I FILES: EXTRACTED EXCLUDED BYPASSED IN ERROR
PAM140I 1 0 0 0
PMT002I PKUNZIP processing complete. RC=00000004 4(Dec) Start: 12:59:48.86 End
Is there a better set of site commands to transfer a .gz file from Linux to z/OS to avoid this error?
**** Update ****
Using SaggingRufus's answer below, it turns out it doesn't much matter how you send the .gz file, as long as it's binary. His suggestion pointed us to the parameters passed to pkunzip for the output file, which was RECFM=VB and was truncating 4 bytes off each record.
Because it is a variable-block file, 4 bytes of each record are taken by the record descriptor word (RDW). Allocate the output file with an LRECL of 1004 and the 1000-byte records will fit.
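For illustration, a hypothetical JCL allocation for the extract data set with room for the 4-byte RDW (the data set name is taken from the log above; the space figures are examples):
//EXTRACT  DD DSN=TEST.FTP.SOA5021I.TXT,
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(50,50)),
//            DCB=(RECFM=VB,LRECL=1004,BLKSIZE=32000)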
Rather than generating a .zip file, perhaps generate a .tar.gz file and transfer it to z/OS UNIX? Tar is shipped with z/OS by default, and Rocket Software provides a port of gzip that is optimized for z/OS.
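A rough sketch of that alternative from the Linux side (file and path names are placeholders):
tar -czf payload.tar.gz payload/
# transfer payload.tar.gz in binary mode to a z/OS UNIX path, then on z/OS:
#   gzip -d payload.tar.gz
#   tar -xf payload.tar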

What is the file ORA_DUMMY_FILE.f in Oracle?

Oracle version: 12.2.0.1
As you know, these are the Unix processes for the parallel servers in Oracle:
ora_p000_ora12c
ora_p001_ora12c
....
ora_p???_ora12c
They can also be seen with the view gv$px_process, and the spid for each parallel server can be obtained from there.
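For example, a quick way to list them (these are standard columns of gv$px_process):
select inst_id, server_name, status, pid, spid
  from gv$px_process;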
Then I look for the open files associated with the parallel server:
ls -l /proc/<spid>/fd
And I'm finding around 500-10000 file descriptors for several of the parallel servers, all like this one:
991 -> /u01/app/oracle/admin/ora12c/dpdump/676185682F2D4EA0E0530100007FFF5E/ORA_DUMMY_FILE.f (deleted)
I've been closing them using gdb (actually I've created a small script to do it, because there are thousands of them):
gdb -p <spid>
gdb> p close(<fd_id>)
But after some hours the file descriptors start being created again (hundreds every day).
If they are not deleted, eventually the Linux limit is reached and any parallel query throws an error like this:
ORA-12801: error signaled in parallel query server P001
ORA-01116: error in opening database file 132
ORA-01110: data file 132: '/u02/oradata/ora12c/pdbname/tablespacenaname_ts_1.dbf'
ORA-27077: too many files open
Does anyone have any idea how and why these file descriptors are being created, and how to avoid it?
Edited: Added some more information that could be useful.
I've verified that when a new PDB is created, a directory object DATA_PUMP_DIR is created in it (select * from all_directories) pointing to:
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>
The Linux directory is also created.
Also, one file descriptor is created pointing to ORA_DUMMY_FILE.f in the new dpdump subdirectory, like the ones described initially:
lsof | grep "ORA_DUMMY_FILE.f (deleted)"
/u01/app/oracle/admin/ora12c/dpdump/<xxxxxxxxxxxxx>/ORA_DUMMY_FILE.f (deleted)
This may be OK; the problem I face is the continuous growth of file descriptors pointing to ORA_DUMMY_FILE.f, which eventually hits the Linux limits.

JMeter saves the response to a file of at most 10MB when the test file is 50MB

I'm testing an HTTP request that downloads a 50MB file, but "Save Responses to a file" saves a file of at most 10MB. How do I set it up so that I can download the full-size file when running from the command line?
Add the line document.max_size=0 to your user.properties file, or uncomment it in the jmeter.properties file; adding it to user.properties is the suggested approach.
The default size limit for document parsing is 10MB. If you set this value to zero (0), the 10MB check is disabled. For the property to take effect, you have to restart JMeter.
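For example, the relevant line in user.properties (in JMeter's "bin" folder) would be:
# disable the 10MB limit on document parsing
document.max_size=0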
There is the httpsampler.max_bytes_to_store_per_request JMeter property, which defaults to 10 megabytes, so you can increase it according to your response size if you need the full file stored. It can be amended in two ways:
Add the following line to the user.properties file (located in JMeter's "bin" folder):
httpsampler.max_bytes_to_store_per_request=62914560
this will increase the maximum response size for HTTP Request samplers to 60 megabytes; a JMeter restart will be required to pick the property up.
Pass the property via the -J command-line argument, like:
jmeter -Jhttpsampler.max_bytes_to_store_per_request=62914560 -n -t test.jmx -l test.jtl
in this case a restart is not required, but the property is changed only for the current JMeter session.
References:
Configuring JMeter
Apache JMeter Properties Customization Guide

How to configure xemacs to recognize database specified in .sql-mode?

I am running xemacs with a .sql-mode file containing the following:
(setq sql-association-alist
  '(
    ("XDBST (mis4) " ("XDBST" "xsius" "password"))
    ("dev " ("DEVTVAL1" "xsi" "password" "devbilling"))
  ))
When I log in to the database in xemacs by selecting Utilities->Interactive Mode->Use Association, it logs me in but it does not pick up the database parameter. For example, when I log in to "dev", it logs me in but then when I do "select db_name()" it yields csdb instead of devbilling. It appears that it is picking up the default database associated with the user and ignoring the database parameter. How do you configure xemacs so that it picks up the database parameter specified in .sql-mode when the option is selected?
Thanks,
Mike
I did some more research: xemacs uses sql-mode.el (on my system, /usr/local/xemacs/lisp/sql-mode.el) to log in with SQL Mode. The code in that file does not use the database specified in .sql-mode in Interactive Mode. It does, however, use the database specified in .sql-mode in Batch Mode, so you can use Batch Mode as a workaround.

How to transfer a 60MB file to a queue using MQ FTE

I am trying to transfer a 60MB file to a queue, but WebSphere MQ FTE stalls the transfer and keeps recovering. I am running WebSphere MQ FTE with the default configuration.
I have tested the following scenarios, with different results depending on the configuration changes I made.
These commands were issued to create the transfer template and monitor:
fteCreateTransfer -sa AGENT1 -sm TQM.FTE -da AGENT2 -dm QM.FTE -dq FTE.TEST.Q -p QM.FTE -de overwrite -sd delete -gt /var/IBM/WMQFTE/config/TQM.FTE/TEST_TRANSFER.xml D:\\rvs\\tstusrdat\\ALZtoSIP\\INC\\*.zip
fteCreateMonitor -ma AGENT1 -mn TEST_MONITOR -md D:\\rvs\\tstusrdat\\ALZtoSIP\\INC -mt /var/IBM/WMQFTE/config/TQM.FTE/TEST_TRANSFER.xml -tr match,*.zip
Test was performed on files: 53MB and 30MB
Default configuration (just enableQueueInputOutput=true added to AGENT2.properties)
1) all default
no success, transfer status: "recovering"
for both files
2) added maxInputOutputMessageLength=60000000, destination queue max message length changed to 103809024
result: transfer status "failed" with the following exception: PM71138: BFGIO0178E: A QUEUE WRITE FAILED DUE TO A WMQAPIEXCEPTION WITH MESSAGE TEXT CC=2 RC=2142 MQRC_HEADER_ERROR
for both files
After reading this: http://pic.dhe.ibm.com/infocenter/wmqfte/v7r0/topic/com.ibm.wmqfte.doc/message_length.htm I came up with working settings:
3) maxInputOutputMessageLength=34603008 (its maximum value), destination queue max message length still set to 103809024
result for the 30MB file: success
result for the 53MB file: "failed" with the same exception PM71138: BFGIO0178E: A QUEUE WRITE FAILED DUE TO A WMQAPIEXCEPTION WITH MESSAGE TEXT CC=2 RC=2142 MQRC_HEADER_ERROR
So according to this, I'm afraid one can't transfer files larger than 34603008 bytes.
If you are transferring a file to a queue, you definitely can't use the default settings. You have to add "enableQueueInputOutput=true" to agent.properties for the agent that uses a queue as its source or destination.
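Putting the working configuration together, a sketch of the relevant agent.properties entries for the destination agent, using the property names and the 34603008-byte maximum discussed above:
# AGENT2.properties: agent using a queue as destination
enableQueueInputOutput=true
# per-message limit for queue input/output; 34603008 bytes is the maximum
maxInputOutputMessageLength=34603008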
