Can anyone tell me how to exclude system objects while taking a backup of a queue manager using the save queue manager and dump queue manager commands?
Another tool you can use instead of saveqmgr or dmpmqcfg, and which can exclude SYSTEM.* objects when backing up all of your queue manager object definitions, is MO71.
Personally, I think it is a bad idea to exclude SYSTEM.* objects. You might have particular values set on SYSTEM.DEFAULT.MODEL.QUEUE, SYSTEM.DEF.SVRCONN, etc. that may be important when rebuilding a queue manager.
You could write a simple shell script or batch file to copy all objects to a new MQSC file while excluding ones like SYSTEM.ADMIN.*, SYSTEM.AUTH.DATA.QUEUE, etc.
There is no easy way to tell dmpmqcfg to exclude the SYSTEM.* objects. You could issue the command multiple times to cover all of your other object prefixes, but it would be easier to simply delete the SYSTEM.* objects from the output produced.
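For instance, here is a minimal sketch in Java of that delete-from-the-output approach. The file names are hypothetical, and it assumes the usual MQSC convention that a command continues onto the next line when the current line ends with '+' or '-':

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class FilterSystemObjects {
    public static void main(String[] args) throws IOException {
        // Hypothetical names: the raw dmpmqcfg output and the filtered copy.
        List<String> lines = Files.readAllLines(Paths.get("qmgr-dump.mqsc"));
        try (PrintWriter out = new PrintWriter("qmgr-dump-filtered.mqsc")) {
            boolean continuation = false; // is this line part of the previous command?
            boolean skip = false;         // is the current command being dropped?
            for (String line : lines) {
                if (!continuation) {
                    // Start of a new command: drop it if it names a SYSTEM.* object.
                    skip = line.toUpperCase().contains("('SYSTEM.");
                }
                if (!skip) {
                    out.println(line);
                }
                String trimmed = line.trim();
                continuation = trimmed.endsWith("+") || trimmed.endsWith("-");
            }
        }
    }
}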
How do I set up an external database (MySQL, Postgres; I'm not concerned with which one at this point) for use with Spring Batch metadata?
At the moment I have Spring Batch writing the results of jobs to MongoDB, and that works fine, but I'm not keeping track of job status, so the jobs are run from the start every time, even if they were interrupted halfway through.
There are plenty of examples of how to avoid this, but I can't seem to find a clear answer on what I need to configure to send the metadata somewhere real rather than keeping it in memory.
I attempted adding a properties file, but that had no effect:
# for Postgres:
batch.jdbc.driver=org.postgresql.Driver
batch.jdbc.url=jdbc:postgresql://localhost/postgres
batch.jdbc.user=postgres
batch.jdbc.password=mysecretpassword
batch.database.incrementer.class=org.springframework.jdbc.support.incrementer.PostgreSQLSequenceMaxValueIncrementer
batch.schema.script=classpath:/org/springframework/batch/core/schema-postgresql.sql
batch.drop.script=classpath:/org/springframework/batch/core/schema-drop-postgresql.sql
batch.jdbc.testWhileIdle=false
batch.jdbc.validationQuery=
You need to configure a bean of type DataSource in your batch application context (or extend DefaultBatchConfigurer and set the data source you want to use to store metadata).
There are many samples here: https://github.com/spring-projects/spring-batch/tree/master/spring-batch-samples
You can find the data source configuration here: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-samples/src/main/resources/data-source-context.xml
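As a minimal sketch in Java config (the class name is arbitrary, and the connection settings are taken from the Postgres properties in the question), a single DataSource bean next to @EnableBatchProcessing is enough for the default batch configurer to pick it up and store the job repository metadata in Postgres instead of in memory. Note that the metadata tables must already exist, e.g. by running the schema-postgresql.sql script referenced above:

import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
@EnableBatchProcessing
public class BatchMetadataConfig {

    // With @EnableBatchProcessing, Spring Batch's DefaultBatchConfigurer
    // detects this DataSource automatically and uses it for job/step
    // metadata instead of the in-memory map-based repository.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        ds.setUrl("jdbc:postgresql://localhost/postgres");
        ds.setUsername("postgres");
        ds.setPassword("mysecretpassword");
        return ds;
    }
}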
How do I make WebSphere automatically clean up temp folders during each start or restart?
I found out how to delete them manually, but I can't ask the customer to do that. Is there some parameter or setting that can be used to delete the cache/temp files automatically?
You weren't specific about which cache or temporary files you want to delete, but in general there is no WAS setting to do this. The logging system can be configured to roll log files over, but those aren't temporary files, and typically you would want to keep them for some period of time for audit purposes.

You also typically don't want to delete caches like the OSGi class cache unless specifically told to do so by IBM support, so I wouldn't suggest doing that on a server start/restart. The configuration repository uses temporary files that could be deleted on server start/restart; see this IBM Knowledge Center topic for details on the location of the files.

Having said all that, if you're sure you know which files to delete, I'd suggest wrapping calls to the startServer or stopServer scripts with your own script(s). These are batch files on Windows platforms or shell scripts on other platforms, and they shouldn't be modified by users. In your wrapper, simply delete the files and then call startServer.
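As a rough sketch of such a wrapper in Java (all paths and the server name are hypothetical, and you must verify for yourself that the directory being wiped is safe to delete), the idea is simply: delete first, then delegate to the stock startServer script:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class StartServerWrapper {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical locations; adjust to your profile layout.
        Path tempDir = Paths.get("/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/temp");
        // Delete the directory contents bottom-up (children before parents).
        if (Files.isDirectory(tempDir)) {
            try (Stream<Path> walk = Files.walk(tempDir)) {
                walk.sorted(Comparator.reverseOrder())
                    .filter(p -> !p.equals(tempDir))
                    .forEach(p -> p.toFile().delete());
            }
        }
        // Delegate to the stock script; never edit startServer itself.
        int exit = new ProcessBuilder(
                "/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/startServer.sh", "server1")
                .inheritIO().start().waitFor();
        System.exit(exit);
    }
}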
I have to write my response flowfiles to one directory, then get data from them, change it, and put it into another directory. I want to keep these two directories in sync (meaning that whenever I delete or change a flowfile in one directory, it should change in the other directory too). I have more than 10,000 flowfiles, so a checklist wouldn't be a good solution. Can you recommend:
any controller service that can help me do this?
any better way to accomplish this task without a controller service?
You can use a combination of ListFile, FetchFile, and PutFile processors to detect individual file write changes within a file system directory and copy their contents to another directory. This will not detect file deletions, however, so I believe a better solution is to use rsync within an ExecuteProcess processor.
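For example, an ExecuteProcess processor configured roughly like this would mirror one directory into the other on each scheduled run (the paths are hypothetical; rsync's --delete flag removes files from the destination that no longer exist in the source):

Command: rsync
Command Arguments: -a --delete /data/dir-one/ /data/dir-two/
Redirect Error Stream: true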
To the best of my knowledge, rsync does not work on HDFS file systems, so in that case I would recommend using a tool like Helix or DistCp (I have not evaluated these tools in particular). You can either invoke them from the "command line" via ExecuteProcess or wrap a client library in an ExecuteScript or custom processor.
I have a Spring Batch integration where multiple servers are polling a single file directory. This causes a problem where a file can be picked up and processed by more than one server. I have attempted to add an nio-lock onto the file once a server has got it, but this locks the file for processing, so its contents can't be read.
Is there a Spring Batch/Integration solution to this problem, or is there a way to rename the file as soon as it is picked up by a node?
Consider using a FileSystemPersistentAcceptOnceFileListFilter with a shared MetadataStore: http://docs.spring.io/spring-integration/reference/html/system-management-chapter.html#metadata-store
That way, only one instance of your application will be able to pick up a given file.
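As a minimal sketch (the Redis-backed store and the "polled-files:" key prefix are my own choices here; any shared ConcurrentMetadataStore implementation works), you would then set this filter on the file inbound channel adapter of every polling instance:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter;
import org.springframework.integration.redis.metadata.RedisMetadataStore;

@Configuration
public class PollingFilterConfig {

    // The metadata store must be shared across all polling instances,
    // which is why an external store such as Redis is used here.
    @Bean
    public RedisMetadataStore metadataStore(RedisConnectionFactory connectionFactory) {
        return new RedisMetadataStore(connectionFactory);
    }

    // Accepts each file exactly once across the whole cluster; the
    // prefix namespaces the entries this filter writes into the store.
    @Bean
    public FileSystemPersistentAcceptOnceFileListFilter fileFilter(RedisMetadataStore store) {
        return new FileSystemPersistentAcceptOnceFileListFilter(store, "polled-files:");
    }
}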
Even if we found a solution for the nio-lock, you should understand that a lock means "do not touch until freed". Therefore, when one instance has done its work, another one is ready to pick up the file, and I guess that isn't your goal.
I am facing a scenario where I have to allow access to a file for multiple instances of the same executable, but deny access to the file to all other executables.
For example, if I have a file foo.txt and an executable proc.exe, then any number of proc.exe instances should be able to access and modify foo.txt, but no other process should be able to access or modify this file.
You can't do this based directly on which executable a process is running. However, you can make your processes co-operate with one another, so that the only processes that can access the file are those that know how to do it.
One particularly simple approach would be to create a named file mapping object for the file using CreateFileMapping(). Only processes that know the name of the file mapping would be able to access it. However, you would then only be able to access the file via memory mapping, not via normal I/O functions.
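Here is a minimal sketch of that idea in Java using the JNA Kernel32 bindings (the file path and the mapping name "Local\FooTxtMapping" are hypothetical, error handling is omitted, and it assumes foo.txt already exists and is non-empty). The first instance opens the file with no sharing, so every other executable is locked out, and publishes it under a name that only cooperating instances know:

import com.sun.jna.Pointer;
import com.sun.jna.platform.win32.Kernel32;
import com.sun.jna.platform.win32.WinBase;
import com.sun.jna.platform.win32.WinNT;
import com.sun.jna.platform.win32.WinNT.HANDLE;

public class SharedMappingDemo {
    public static void main(String[] args) {
        Kernel32 k32 = Kernel32.INSTANCE;
        // Open foo.txt exclusively (dwShareMode = 0): no other process can
        // open the file itself while we hold this handle.
        HANDLE file = k32.CreateFile("C:\\data\\foo.txt",
                WinNT.GENERIC_READ | WinNT.GENERIC_WRITE,
                0, null, WinNT.OPEN_EXISTING, WinNT.FILE_ATTRIBUTE_NORMAL, null);
        // Publish the file under a name known only to cooperating instances.
        // Other instances call OpenFileMapping(..., "Local\\FooTxtMapping")
        // and then MapViewOfFile, without ever touching foo.txt directly.
        HANDLE mapping = k32.CreateFileMapping(file, null,
                WinNT.PAGE_READWRITE, 0, 0, "Local\\FooTxtMapping");
        // All access goes through the mapped view, not ReadFile/WriteFile.
        Pointer view = k32.MapViewOfFile(mapping,
                WinBase.FILE_MAP_READ | WinBase.FILE_MAP_WRITE, 0, 0, 0);
        view.setByte(0, (byte) 'X'); // modify the first byte of the file
        k32.UnmapViewOfFile(view);
        k32.CloseHandle(mapping);
        k32.CloseHandle(file);
    }
}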
DuplicateHandle() provides another option, but because the duplicated handle shares a single file object you need to be very careful how you use it. Overlapped I/O is probably the safest approach, as it explicitly supports multiple simultaneous operations on the same object.