How can I make WebSphere automatically clean up its temp folders during each start or restart?
I found out how to delete them manually, but I can't ask the customer to do that. Is there a parameter or setting that deletes the cache/temp files automatically?
You weren't specific about which cache or temporary files you want to delete, but in general there is no WAS setting to do so. The logging system can be configured to roll log files over, but those aren't temporary files, and you typically want to keep them for some period of time for audit purposes. You also typically don't want to delete caches such as the OSGi class cache unless specifically told to do so by IBM support, so I wouldn't suggest doing that on a server start/restart. The configuration repository does use temporary files that could be deleted on server start/restart; see this IBM Knowledge Center topic for details on where those files live. Having said all that, if you're sure you know which files to delete, I'd suggest wrapping the calls to startServer or stopServer with your own script(s). Those commands are batch files on Windows platforms and shell scripts on other platforms, and shouldn't themselves be modified. In your wrapper, simply delete the files and then call startServer.
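For illustration, a minimal wrapper sketch for Linux/UNIX; the profile path, server name, and the wstemp/temp directories below are assumptions you'd verify against the Knowledge Center topic before deleting anything:

#!/bin/sh
# Hypothetical wrapper around startServer.sh -- adjust PROFILE_HOME, the
# server name, and the directories to match your installation.
PROFILE_HOME=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01

# Clear the configuration repository temp files before starting.
rm -rf "$PROFILE_HOME/wstemp"/* "$PROFILE_HOME/temp"/*

# Then start the server as usual, passing through any extra arguments.
exec "$PROFILE_HOME/bin/startServer.sh" server1 "$@"

A matching startServer.bat wrapper would do the same on Windows; the point is that the cleanup lives in your own script, not in the product scripts.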
I want to add 3 extensions to NiFi (nifi-encryptMD5-nar-1.0.nar-unpacked, nifi-getOperator-nar-1.0-SNAPSHOT.nar-unpacked, nifi-splitAttributeValue-nar-1.0.nar-unpacked).
I added the extension folders to the directory /opt/nifi/nifi-1.9.2/work/nar/extensions/.
Then when I restart the NiFi service, NiFi shuts down and does not come back up. When I force the start with the nifi user, NiFi starts, but the extensions have been deleted from /opt/nifi/nifi-1.9.2/work/nar/extensions/.
You have to put the *.nar packages into the nifi/lib directory.
NiFi will extract them automatically into the nifi/work folder on startup.
As daggett says, you need to use the .nar files, not any unpacked directories.
In your nifi.properties there will be two or more properties that provide locations for NiFi libraries:
nifi.nar.library.directory=./lib
nifi.nar.library.autoload.directory=./extensions
nifi.nar.library.directory.<something>=./<yourdir>
The first is the default and contains all the basic NiFi files. It is only checked on startup and any valid nars found are unpacked in the work directory and loaded. Generally you don't want to add anything here except in test environments as it complicates upgrades.
The second is empty by default but it is scanned every 30 seconds for new .nars. These will be unpacked and loaded if possible, but only for new libraries. Already loaded libraries will not be reloaded.
This is a good location to add your validated custom libraries without having to restart NiFi.
The third and any further directories need to be added to the properties file manually. These are loaded on startup only and are useful if you have a lot of custom processors and want to keep them organized.
In your situation I'd put the .nars in the extensions folder and check the logs to see if they were loaded successfully. You'll then need a full refresh of the browser window (Shift+F5 I think) before they show up in the list of processors.
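For example, assuming the default ./extensions autoload directory under the install path from your question, and that you have the packaged .nar files (the names below come from the question, with the -unpacked suffix removed):

# Copy the packaged .nar files (not the *-unpacked work directories)
# into the autoload directory, then watch the application log.
cp nifi-encryptMD5-nar-1.0.nar \
   nifi-getOperator-nar-1.0-SNAPSHOT.nar \
   nifi-splitAttributeValue-nar-1.0.nar \
   /opt/nifi/nifi-1.9.2/extensions/
tail -f /opt/nifi/nifi-1.9.2/logs/nifi-app.log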
In a cluster setup, add the .nars on all nodes and verify their availability before trying to add them to the canvas or things might get messy.
Can I write a command in an SVN hook on Windows that automatically relocates some folders to another location in the repository?
The hook must run on the server.
For example: a user commits files in their working copy (C:\svnworkingcopy\dev).
On the server, a hook runs and automatically relocates or copies these files into another folder of the repository (https://svnserver/onlyread), where this user has read-only permission.
Thanks!
svn switch --relocate a user's working copy with a hook script? It looks like you are confusing the terms. Nevertheless, I advise you to check the following warning in the SVNBook:
While hook scripts can do almost anything, there is one dimension in which hook script authors should show restraint: do not modify a commit transaction using hook scripts. While it might be tempting to use hook scripts to automatically correct errors, shortcomings, or policy violations present in the files being committed, doing so can cause problems. Subversion keeps client-side caches of certain bits of repository data, and if you change a commit transaction in this way, those caches become indetectably stale. This inconsistency can lead to surprising and unexpected behavior. Instead of modifying the transaction, you should simply validate the transaction in the pre-commit hook and reject the commit if it does not meet the desired requirements. As a bonus, your users will learn the value of careful, compliance-minded work habits.
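If the goal is simply to keep users from writing into the read-only area, the usual approach is exactly that: a pre-commit hook that rejects such commits rather than moving files around. A rough sketch (the protected path onlyread/ is an assumption; on a Windows server the same svnlook check would live in hooks\pre-commit.bat):

#!/bin/sh
# pre-commit: reject commits that touch the read-only area instead of
# rewriting the transaction. REPOS and TXN are passed in by Subversion.
REPOS="$1"
TXN="$2"

# List the paths changed in this transaction and look for the protected prefix.
if svnlook changed -t "$TXN" "$REPOS" | awk '{print $2}' | grep -q '^onlyread/'; then
  echo "Commits under onlyread/ are not allowed; it is maintained server-side." >&2
  exit 1
fi
exit 0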
The file names seem to point to our WAS data sources. However, we're not sure what is creating them or why there are so many. The servers didn't seem to crash. Why is WAS 6.1.0.23 creating these, and why aren't they being cleaned up?
There are many files like these, with some going up to xxx.43.lck
DWSqlLog0.0.lck
DWSqlLog0.0
TritonSqlLog0.0.lck
TritonSqlLog0.0
JTSqlLog0.0
JTSqlLog0.1
JTSqlLog0.3
JTSqlLog0.2
JTSqlLog0.4.lck
JTSqlLog0.4
JTSqlLog0.3.lck
JTSqlLog0.2.lck
JTSqlLog0.1.lck
JTSqlLog0.0.lck
WAS uses JDK logging, and the JDK file handler creates such files with extensions .0, .1, etc., along with a .lck file so that the WAS runtime holds a lock on the file it is writing to.
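For illustration only (this is not the WAS configuration itself, just the generic java.util.logging naming scheme): a FileHandler pattern using %u (a unique per-process number) and %g (the rotation generation) produces exactly this kind of series, and the handler also creates .lck lock files while it holds the logs open.

# Hypothetical logging.properties snippet; the pattern and limits are made up.
# With a pattern like JTSqlLog%u.%g and count 5, rotation produces names such
# as JTSqlLog0.0, JTSqlLog0.1, ... plus .lck lock files while the handler
# has them open.
java.util.logging.FileHandler.pattern=JTSqlLog%u.%g
java.util.logging.FileHandler.count=5
java.util.logging.FileHandler.limit=1000000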
Cheers
Manglu
When I run my Eclipse RCP application, it creates a whole lot of directories in my $HOME/.eclipse directory. What is this?
I don't want the files there; how can I prevent them from being created there? The rationale: the application must run cleanly and only leave files in one specific location (not $HOME/.eclipse).
I figured it was controlled by osgi.instance.area, so I tried setting it to different values (a directory, @none, @noDefault, etc.), but I can't stop the application from creating directories in $HOME/.eclipse. -data and the other arguments work as expected.
On my system the only thing that is stored in .eclipse is the Equinox Secure Storage. Here is the blurb on the doc page for that:
By default, secure storage is located in your home directory. On Windows that typically resolves to "C:\Documents and Settings\<user name>\.eclipse\org.eclipse.equinox.security". This location is selected to allow multiple Eclipse-based applications to share the same secure storage.
If you would like to modify the location of the default secure storage, you can use the "-eclipse.keyring <file_path>" runtime option. The <file_path> is a path to the file which is used to persist the secure storage data.
Here is the online reference.
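So rather than fighting osgi.instance.area, you can point the secure storage at your application's own state directory. A sketch of the launch arguments (the executable name and paths are made up):

# Hypothetical launch; adjust the executable name and the target paths.
./myRcpApp -data /opt/myapp/workspace -eclipse.keyring /opt/myapp/state/secure_storage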
For an application I'm writing, I want to programmatically find out which computer on the network a file came from. How can I best accomplish this?
Do I need to monitor network transactions or is this data stored somewhere in Windows?
When a file is copied to the local system, Windows does not keep any record of where it was copied from. So unless the application that created it saved that information in the file, it is lost.
With file auditing, file and directory operations can be tracked, but I don't think that includes the source path for file copies (just who created the file and when).
Yes, it seems like you would either need to detect the file transfer based on interception of network traffic, or if you have the ability to alter the file in some way, use public key cryptography to sign files using a machine-specific key before they are transferred.
Create a service on either the destination computer or on the file-hosting computers that adds records to an alternate data stream attached to each file, much the way Windows records zone information for files downloaded from the internet.
You can have a background process on machine A which "tags" each file as having been tagged by machine A on such-and-such a date and time. Then when machine B downloads the file, assuming both are using NTFS filesystems, it can see the tag from A. Or, if you can't have a process on the server, you can use NTFS streams on the "client" side via the packet-sniffing methods others have described. The bonus here is that future file copies will retain the data as long as they stay between NTFS systems.
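A rough sketch of the tagging idea from a Windows command prompt (the stream name "origin" and the paths are made up; this only works on NTFS):

rem Write the origin machine and timestamp into an alternate data stream,
rem then read it back later on the machine that received the file.
echo MACHINE-A 2024-01-01T12:00 > \\fileserver\share\report.docx:origin
more < \\fileserver\share\report.docx:origin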
Alternative: require that all file transfers go through a web portal (as opposed to network drag-and-drop), which gives you built-in logging, or through some other kind of file-retrieval proxy. Do you have control over procedures like this?