I'm using WER to generate crash dumps for my application using this method:
https://learn.microsoft.com/en-us/windows/win32/wer/collecting-user-mode-dumps
My app runs in pods in Kubernetes, and I'm writing these dumps to a network share mounted in the pods. I then have a web application that queries this share for dump files and displays them in a view for download.
The problem I'm running into is the file name of the dumps. The file name is in this format:
<exe name><PID>.dmp
I'd like to add some identifying information (particularly the hostname) to the name of the file. Is this possible? I've been searching Google since yesterday (using various phrasings, etc.) but I'm coming up blank.
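For reference, the setup from that page comes down to a handful of values under the LocalDumps registry key; a minimal sketch of setting them programmatically is below, with MyApp.exe, the share path, and the per-host subfolder all hypothetical placeholders (the documented settings cover the dump folder, count, and type rather than the file name):

    // Minimal sketch of the per-application LocalDumps configuration the linked
    // page describes. The application name (MyApp.exe), the share path, and the
    // per-host subfolder are hypothetical placeholders; the program must run
    // elevated so HKLM is writable.
    #include <windows.h>
    #include <string>

    int main()
    {
        const std::wstring keyPath =
            L"SOFTWARE\\Microsoft\\Windows\\Windows Error Reporting\\LocalDumps\\MyApp.exe";

        HKEY key = nullptr;
        if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, keyPath.c_str(), 0, nullptr, 0,
                            KEY_SET_VALUE, nullptr, &key, nullptr) != ERROR_SUCCESS)
            return 1;

        // WER controls the file name (<exe name>.<PID>.dmp); only the folder is
        // configurable, so one workaround is to point each host at its own subfolder.
        wchar_t host[MAX_COMPUTERNAME_LENGTH + 1] = {};
        DWORD hostLen = MAX_COMPUTERNAME_LENGTH + 1;
        GetComputerNameW(host, &hostLen);
        const std::wstring dumpFolder = L"\\\\share\\dumps\\" + std::wstring(host);

        RegSetValueExW(key, L"DumpFolder", 0, REG_EXPAND_SZ,
                       reinterpret_cast<const BYTE*>(dumpFolder.c_str()),
                       static_cast<DWORD>((dumpFolder.size() + 1) * sizeof(wchar_t)));

        DWORD dumpType = 2;   // 2 = full dump
        RegSetValueExW(key, L"DumpType", 0, REG_DWORD,
                       reinterpret_cast<const BYTE*>(&dumpType), sizeof(dumpType));

        DWORD dumpCount = 10; // keep at most 10 dumps per application
        RegSetValueExW(key, L"DumpCount", 0, REG_DWORD,
                       reinterpret_cast<const BYTE*>(&dumpCount), sizeof(dumpCount));

        RegCloseKey(key);
        return 0;
    }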
I'm a Windows developer finally getting my feet wet on Mac - I've already stumbled across translocation.
I have a Qt-based application that I am porting to the Mac, and it has a few basic types of user data. The app is currently distributed via a zip file.
1) Settings/config data. I understand this belongs in a plist file.
2) XML-based application data. This data is intended to be edited by both users and the application. In most cases, it will only be power users that manually edit these files.
3) Image-based (jpg/png) in-application icons. This data is used by the application and expected to be created/provided separately by users.
On Windows, both 2 and 3 are simply located in subdirectories next to the .exe.
What are the options or "correct" locations for such application data?
The usual location is a custom folder in the Application Support directory. This directory exists in the local domain (/Library/Application Support/) to save data for all users and in the user domain (~/Library/Application Support/) to save data per user.
There is a convention to name the custom folder in Application Support after the bundle identifier of the application, but this is not mandatory.
While the Application Support directory itself is created implicitly, your app is responsible for creating the custom folder.
(NS)FileManager provides an API to get the Application Support directory without hard-coding paths. If your app is sandboxed, you must use this (NS)FileManager API anyway.
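Since the app is Qt-based, one way to resolve and create that folder without hard-coding paths is QStandardPaths; a minimal sketch, with the organization and application names as placeholders:

    // Minimal Qt sketch: resolve the per-user data location and create the
    // app's custom folder there. The organization/application names are
    // placeholders; on macOS, AppDataLocation resolves to a folder under
    // ~/Library/Application Support derived from those names.
    #include <QCoreApplication>
    #include <QDebug>
    #include <QDir>
    #include <QStandardPaths>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QCoreApplication::setOrganizationName("ExampleOrg");   // placeholder
        QCoreApplication::setApplicationName("ExampleApp");    // placeholder

        // Per-user application data (XML documents, user-supplied icons, ...).
        const QString dataDir =
            QStandardPaths::writableLocation(QStandardPaths::AppDataLocation);

        // The Application Support directory already exists, but creating the
        // custom folder (and any subfolders) is the app's job.
        if (!QDir().mkpath(dataDir + "/icons"))
            qWarning() << "Could not create" << dataDir;

        qDebug() << "Application data folder:" << dataDir;
        return 0;
    }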
In the folder <WAS Liberty Profile root>\<profile>\usr\servers\defaultServer there are many files named core.*.dmp and heapdump.*.phd. These files are between 130 MB and 1.3 GB in size, even though my deployed app is only 4 MB.
Can I delete these files *.dmp and *.phd?
What are these files for?
Short answer: yes, it's safe to delete them, but you should find out why they're appearing, as it could indicate that your application is not running correctly.
If your dump files were created a long time ago, or you know you were debugging an OutOfMemoryError or have been running server javadump --include=heap,system, then go ahead and delete the files. If, however, you keep getting new dump files and don't know why, then read on.
The core and heapdump files contain a snapshot of the memory of the application from a specific point in time. Usually you do this to capture the state of your application at the point where something goes wrong so that you can examine it with analysis tools and try to work out what went wrong.
For example, by default the IBM JVM will perform a dump when an OutOfMemoryError is thrown. This allows you to look at the dump file and see what's using up all the memory.
If you have a corresponding javacore file, the fourth line or so should say why the memory dump was made.
e.g. 1TISIGINFO Dump Requested By User (00100000) Through com.ibm.jvm.Dump.javaDumpToFile (caused by running server javadump)
or 1TISIGINFO Dump Event "user" (00004000) received (caused by running kill -3)
If it's a "user" event, then something's asking the JVM to create a dump. If not, and you're still not sure what's causing it, check your jvm.options file for any -Xdump options which can be used to cause the JVM to create a dump in response to certain events. More information on that in the Knowledge Center.
I have a Windows network in which many files are shared across many users with full control. I have a folder on my system shared with everyone, so whenever I access it by machine name (Run -> \\Servername) from another system, I can see the shared folder and open/write files in it.
But my requirement is to close, over the network, any files that are open on my system. So I used NetFileEnum to list the IDs of all open files so that I can close them with the NetFileClose API.
The problem is that NetFileEnum returns IDs that look like junk, such as 111092900 and -1100100090, which I can't use to close the files from another machine. So I listed the files opened over the network with the net file command, noted an ID (say 43), and hard-coded it into the call NetFileClose("Servername", 43); but when I ran it I got ACCESS_DENIED_ERROR. If the same code runs on the server itself, it closes the files successfully. I have given all users full control on the share.
So why ACCESS_DENIED_ERROR, and why is NetFileEnum returning invalid IDs? Is there anything else that needs to be done for these APIs to work? How can I use them properly to close files opened over the network?
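For reference, a minimal sketch of the enumerate-and-close pattern I'm describing, with the server name as a placeholder (at information level 3, NetFileEnum fills in FILE_INFO_3 structures whose fi3_id is what NetFileClose expects; the documentation requires Administrators or Server Operators rights on the target server for both calls):

    // Minimal sketch of enumerating and closing files opened over the network on
    // \\Servername (placeholder name). Link against Netapi32.lib. Per the
    // documentation, both calls require Administrators or Server Operators
    // rights on the target server, which is a common cause of access-denied
    // errors when run from another machine.
    #include <windows.h>
    #include <lm.h>
    #include <cstdio>

    #pragma comment(lib, "Netapi32.lib")

    int main()
    {
        LPWSTR server = const_cast<LPWSTR>(L"\\\\Servername"); // placeholder
        DWORD entriesRead = 0, totalEntries = 0;
        DWORD_PTR resumeHandle = 0;
        LPBYTE buffer = nullptr;

        NET_API_STATUS status = NetFileEnum(
            server, nullptr, nullptr, 3,            // level 3 -> FILE_INFO_3
            &buffer, MAX_PREFERRED_LENGTH,
            &entriesRead, &totalEntries, &resumeHandle);

        if (status != NERR_Success) {
            wprintf(L"NetFileEnum failed: %lu\n", status);
            return 1;
        }

        FILE_INFO_3* info = reinterpret_cast<FILE_INFO_3*>(buffer);
        for (DWORD i = 0; i < entriesRead; ++i) {
            // fi3_id is an opaque identifier, not a handle; it is only meaningful
            // when passed back to NetFileClose against the same server.
            wprintf(L"id=%lu path=%ls user=%ls\n",
                    info[i].fi3_id, info[i].fi3_pathname, info[i].fi3_username);

            status = NetFileClose(server, info[i].fi3_id);
            if (status != NERR_Success)
                wprintf(L"NetFileClose(%lu) failed: %lu\n", info[i].fi3_id, status);
        }

        NetApiBufferFree(buffer);
        return 0;
    }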
I have 10 applications that all use the same logic to write their log to a text file located in the application root folder.
I have an application that reads the log files of all the applications and shows the details in a web page.
Can the same be achieved on Windows Azure? I don't want to use the 'DiagnosticMonitor' APIs, as I cannot change the logging logic of the applications.
Thanks,
Aman
Even if this is technically possible, it is not advisable, as the Fabric Controller can re-create any role at a whim (well, with good reasons, but unpredictable nonetheless), and whenever that happens you will lose any files stored locally on the role.
So primarily you should be looking for a different place to store those logs. There are many options, but all of them require changing the logging logic of the application.
You could do this, but aside from the issue Yossi pointed out (the log would be ephemeral; it could get deleted at any time), you'd have a different log file on each role instance (VM). That means when you hit your web page to view the log, you'd see whatever happened to be in the log on that particular VM, instead of what you presumably want (a roll-up of the log files across all VMs).
Windows Azure Diagnostics could help, since you can configure it to copy log files off to blob storage (so no need to change the logging). But honestly I find Diagnostics a bit cumbersome for this. It will end up creating a lot of different blobs, and you'll have to change the log viewer to read all those blobs and combine them.
I personally would suggest writing a separate piece of code that monitors the log file and, for each new line, stores the line as an entity (row) in table storage. This bit of code could be launched as a startup task and just run continuously as a separate process (leaving everything else unchanged). Then modify the log viewer to read the last n entities from table storage and display them.
(I'm assuming you can modify the log viewer even if you can't modify the apps that log to the file.)
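Here is a minimal sketch of the monitoring loop that suggestion implies, in plain C++; storeLineInTableStorage and the log path are hypothetical placeholders for whatever storage client call and file you actually use:

    // Minimal sketch of a separate log-follower process: re-scan the existing
    // log file and hand each new line to table storage. storeLineInTableStorage
    // and the log path are hypothetical placeholders; real code would also need
    // to handle log rotation and partial last lines.
    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <thread>

    void storeLineInTableStorage(const std::string& line)
    {
        // Placeholder: insert an entity whose PartitionKey/RowKey identify the
        // instance and timestamp, and whose payload is the log line.
        std::cout << "would store: " << line << '\n';
    }

    int main()
    {
        const std::string path = "application.log"; // placeholder path
        std::size_t linesSent = 0;

        for (;;) {
            std::ifstream log(path);
            std::string line;
            std::size_t lineNo = 0;
            while (std::getline(log, line)) {
                if (++lineNo > linesSent) {         // only forward lines we haven't seen
                    storeLineInTableStorage(line);
                    linesSent = lineNo;
                }
            }
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }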
What about writing the logs to something like an Azure storage table? You just need to define a unique PartitionKey/RowKey, and then you can easily retrieve the log for the web page.
For an application I'm writing, I want to programmatically find out which computer on the network a file came from. How can I best accomplish this?
Do I need to monitor network transactions or is this data stored somewhere in Windows?
When a file is copied to the local system, Windows does not keep any record of where it was copied from. So unless the application that created it saved such information in the file, it will be lost.
With file auditing, file and directory operations can be tracked, but I don't think that includes the source path for file copies (just who created the file and when).
Yes, it seems like you would either need to detect the file transfer by intercepting network traffic, or, if you have the ability to alter the file in some way, use public-key cryptography to sign files with a machine-specific key before they are transferred.
Create a service on either the destination computer or on the file-hosting computers that adds records to an Alternate Data Stream attached to each file, much the way Windows uses the Zone.Identifier stream for files downloaded from the internet.
You can have a background process on machine A that "tags" each file with machine A's name and the date and time. Then when machine B downloads the file, assuming both are using NTFS filesystems, it can see the tag from A. Or, if you can't run a process at the server, you can write the NTFS streams on the "client" side via packet-sniffing methods as others have described. The bonus here is that future file copies will retain the data as long as they stay between NTFS systems.
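A minimal sketch of that tagging idea using a named NTFS stream, with the stream name (SourceMachine) and file path as placeholders:

    // Minimal sketch of tagging a file with its origin via an NTFS Alternate
    // Data Stream. The stream name (SourceMachine) and file path are
    // placeholders; the stream is carried along on NTFS-to-NTFS copies but is
    // stripped by non-NTFS storage.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        const wchar_t* stream = L"C:\\share\\example.txt:SourceMachine"; // placeholder

        // Write the tag: here just the local computer name.
        wchar_t host[MAX_COMPUTERNAME_LENGTH + 1] = {};
        DWORD hostLen = MAX_COMPUTERNAME_LENGTH + 1;
        GetComputerNameW(host, &hostLen);

        HANDLE h = CreateFileW(stream, GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE)
            return 1;
        DWORD written = 0;
        WriteFile(h, host, hostLen * sizeof(wchar_t), &written, nullptr);
        CloseHandle(h);

        // Read the tag back (e.g. on the destination machine).
        h = CreateFileW(stream, GENERIC_READ, FILE_SHARE_READ, nullptr,
                        OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h != INVALID_HANDLE_VALUE) {
            wchar_t tag[MAX_COMPUTERNAME_LENGTH + 1] = {};
            DWORD bytesRead = 0;
            ReadFile(h, tag, sizeof(tag) - sizeof(wchar_t), &bytesRead, nullptr);
            CloseHandle(h);
            wprintf(L"File was tagged by: %ls\n", tag);
        }
        return 0;
    }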
Alternative: require that all file transfers go through a web portal (as opposed to network drag-and-drop), which gives you built-in logging, or through some other kind of file-retrieval proxy. Do you have control over procedures like this?