Windows Server 2012 - MSMQ - Journaling keeps disabling itself

I'm trying to enable journaling on my 2012 server - I have two queues on the host. On one, I was able to enable this function and it has stayed enabled since. On the other queue, I enable journaling and it appears to take, but after a few refreshes I go back into properties and it has turned itself off again.
The queue is used by a few applications - an IIS app and a custom service. I haven't tried stopping these as they are in constant use. Could this be the cause?
Also, regarding the limit on the queue: what's the easiest way to check what the current storage quota is set to?
Cheers all.

Setting journaling makes changes to the corresponding text file in the C:\Windows\System32\msmq\storage\lqs folder. Find the oddly named text file that matches your private queue. It will contain "Journal=01" if journaling is enabled and "JournalQuota=12345" for the journal storage limit. If journaling is disabled after you switch it on, check the date/time stamp on the file to see whether it has been updated. If the date hasn't changed, then MSMQ isn't writing the changes to the file for some reason. If the date has changed but the journal setting is off, then some other process must be telling MSMQ to disable journaling (but that would mean a process is monitoring journaling to ensure it isn't switched on). Try manually editing the file yourself in a text editor and see what happens; you would need to restart the MSMQ service for it to pick up the change.
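If you'd rather check from code than from the lqs file, System.Messaging exposes the same settings. A minimal C# sketch (the queue path is a placeholder; both sizes are reported in KB):

    using System;
    using System.Messaging;

    class QueueQuotaCheck
    {
        static void Main()
        {
            // Adjust the path to your own private queue.
            using (var q = new MessageQueue(@".\private$\myqueue"))
            {
                Console.WriteLine("Journaling enabled: {0}", q.UseJournalQueue);
                Console.WriteLine("Queue quota (KB):   {0}", q.MaximumQueueSize);
                Console.WriteLine("Journal quota (KB): {0}", q.MaximumJournalSize);
            }
        }
    }

If UseJournalQueue reads back as false shortly after you enabled it in the MMC, that at least confirms the setting really is being reverted rather than it being a display glitch.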

Related

Looking for Windows Event Viewer system log message templates, where can I get them?

I need to get hold of at least the Microsoft Windows system event message templates.
Is there a place I can find those?
A template, for example:
Windows cannot access the file gpt.ini for GPO CN={31B2F340-016D-11D2-945F-00C04FB984F9},CN=Policies,CN=System,DC=,DC=com. The file must be present at the location <\\sysvol\\Policies{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini>. (.). Group Policy processing aborted.
where the parameters are surrounded by tags.
Thank you for your help.
Each event log in Windows has its own registry entry, for example:
The System event log has its entry at this path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\System
Under the System key there are keys for each event source that writes events into the System event log.
Most event sources also contain an EventMessageFile value, which points you to a .dll or .exe that contains Message Tables inside it. That's actually what you should look for. It would be useful to read about Event Logging in Windows.
There are also some existing tools that allow you to see the messages for a particular event source; unfortunately, I don't remember the exact names of those utilities.
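To illustrate, here is a small C# sketch that walks the System log's sources and prints each EventMessageFile value (environment variables in the paths come back already expanded):

    using System;
    using Microsoft.Win32;

    class EventMessageFiles
    {
        static void Main()
        {
            // Each subkey under EventLog\System is an event source.
            using (var log = Registry.LocalMachine.OpenSubKey(
                @"SYSTEM\CurrentControlSet\Services\EventLog\System"))
            {
                if (log == null) return;
                foreach (string source in log.GetSubKeyNames())
                {
                    using (var key = log.OpenSubKey(source))
                    {
                        // EventMessageFile points at the binary holding the message table.
                        var file = key.GetValue("EventMessageFile") as string;
                        if (file != null)
                            Console.WriteLine("{0}: {1}", source, file);
                    }
                }
            }
        }
    }

You would then pull the actual templates out of those binaries' message tables, for example with a resource viewer or FormatMessage from native code.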

Concern about MQ data backup during migration

I'm working on a queue manager migration from 6.0 to 7.0, but I ran into a problem when rolling a V6.0 queue manager back from 7.0 on Windows. After re-installing MQ 6.0, I copied back the previously backed-up QMGR data and log, and then tried to start that QMGR, for instance TEST01. However, strmqm TEST01 returned an error saying no such QMGR exists.
The restore procedure I followed is from the info center below:
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp
and I backed up and restored the QMGR data and log as follows:
Backup
copy C:\Program Files (x86)\IBM\WebSphere MQ\Qmgrs\TEST01 to another path
copy C:\Program Files (x86)\IBM\WebSphere MQ\log\TEST01 to another path
Restore
copy the above backup folders back to the target paths
So given the above steps, did I miss anything or do something wrong?
UPDATE:
This issue has been fixed. I had forgotten to back up the queue manager's configuration information from the registry; once I restored that as well, the QMGR was recognized. That's why MQ could not see my QMGR at first.
Additionally, I've got another question here:
How do I transfer the configuration information from the registry to the mqs.ini file?
You are far better off not migrating QMgrs, but rather creating new ones at the new version. Although IBM has always provided an upgrade path, the implementation of certain functionality differs from version to version. For example, on Windows the registry settings used in V6 are no longer used in V7.1 and higher. The perceived requirement to migrate in place usually comes from the belief that replacing the QMgr would somehow lose something.
In fact, this is rarely the case. There is also nothing special about a QMgr such that well-designed client applications would need to know its name. The host, port and channel uniquely identify a QMgr for a client application. If the app specifies the QMgr's name and it does not match, the connection fails. But the app can specify a blank QMgr name and the connection will succeed. The QMgr's name is automatically filled into the Reply-To QMgr field so requests are properly handled. The only things that need to know the name are a QRemote definition (which can be repointed) or a local app using a bindings-mode connection.
That said, to answer your question: just performing the upgrade to V7.1 or V7.5 will move the QMgr's settings to the ini file.
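To make the blank-QMgr-name point concrete, here is a minimal C# sketch using the MQ .NET client (the host, port and channel values are placeholders):

    using System;
    using IBM.WMQ; // amqmdnet.dll from the MQ client install

    class BlankQmgrConnect
    {
        static void Main()
        {
            // The host, port and channel are what actually identify the
            // queue manager to a client application.
            MQEnvironment.Hostname = "mqhost.example.com";
            MQEnvironment.Port = 1414;
            MQEnvironment.Channel = "SYSTEM.DEF.SVRCONN";

            // A blank name connects to whichever QMgr is listening on that
            // host/port/channel, so the app keeps working if you replace
            // the QMgr with a new one at the new version.
            var qmgr = new MQQueueManager("");
            Console.WriteLine("Connected to: " + qmgr.Name);
            qmgr.Disconnect();
        }
    }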

Read application log written on Windows Azure

I have 10 applications that share the same logic for writing their log to a text file located in the application root folder.
I have another application which reads the log files of all the applications and shows the details in a web page.
Can the same be achieved on Windows Azure? I don't want to use the 'DiagnosticMonitor' APIs, as I cannot change the logging logic of the applications.
Thanks,
Aman
Even if this is technically possible, it is not advisable, as the Fabric Controller can re-create any role at a whim (well - with good reasons, but unpredictable nonetheless), and whenever this happens you will lose any files stored locally on a role.
So - primarily you should be looking for a different place to store those logs, and there are many options, but all require that you change the logging logic of the application.
You could do this, but aside from the issue Yossi pointed out (the log would be ephemeral; it could get deleted at any time), you'd have a different log file on each role instance (VM). That means when you hit your web page to view the log, you'd see whatever happened to be on the log on that particular VM, instead of what you presumably want (a roll-up of the log files across all VMs).
Windows Azure Diagnostics could help, since you can configure it to copy log files off to blob storage (so no need to change the logging). But honestly I find Diagnostics a bit cumbersome for this. It will end up creating a lot of different blobs, and you'll have to change the log viewer to read all those blobs and combine them.
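For reference, if you did go the Diagnostics route, the configuration is only a few lines in OnStart. A rough sketch, where the "LogFiles" local resource name, the container name and the transfer period are all assumptions (Path can be any absolute directory, including the app root):

    using System;
    using Microsoft.WindowsAzure.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

            // Ship the apps' existing log folder to blob storage unchanged;
            // no changes to the apps' logging logic are needed.
            config.Directories.DataSources.Add(new DirectoryConfiguration
            {
                Container = "app-logs",
                Path = RoleEnvironment.GetLocalResource("LogFiles").RootPath,
                DirectoryQuotaInMB = 128
            });
            config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

            DiagnosticMonitor.Start(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
            return base.OnStart();
        }
    }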
I personally would suggest writing a separate piece of code that monitors the log file and, for each new line, stores the line as an entity (row) in table storage. This bit of code could be launched as a startup task and just run continuously as a separate process (leaving everything else unchanged). Then modify the log viewer to read the last n entities from table storage and display them.
(I'm assuming you can modify the log viewer even if you can't modify the apps that log to the file.)
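A minimal sketch of that monitor process using the Azure storage client library; the connection string, file path, table name and partition key are all placeholders:

    using System;
    using System.IO;
    using System.Threading;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    // One row per log line; ticks in the RowKey keep rows in time order.
    public class LogLineEntity : TableEntity
    {
        public LogLineEntity() { }
        public LogLineEntity(string source, string line)
        {
            PartitionKey = source;
            RowKey = DateTime.UtcNow.Ticks.ToString("d19") + "_" + Guid.NewGuid().ToString("N");
            Line = line;
        }
        public string Line { get; set; }
    }

    class LogTailer
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<storage connection string>");
            var table = account.CreateCloudTableClient().GetTableReference("applogs");
            table.CreateIfNotExists();

            // Open the log so the writing app can keep appending to it.
            using (var stream = new FileStream(@"C:\app\app.log",
                FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            using (var reader = new StreamReader(stream))
            {
                while (true)
                {
                    string line = reader.ReadLine();
                    if (line == null) { Thread.Sleep(1000); continue; }
                    table.Execute(TableOperation.Insert(new LogLineEntity("app1", line)));
                }
            }
        }
    }

If you want the viewer to show the newest entries first, invert the ticks in the RowKey (DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks) so the latest lines sort to the top.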
What about writing the logs to something like an Azure storage table? You just need to define a unique PartitionKey/RowKey scheme, and then you can easily retrieve the log for the web page.

Is it possible to trash an Azure role host and get it started on the same host without cleanup?

Suppose my Azure role creates a lot of temporary files in the Windows temporary folder and forgets to delete them. At some point it will start receiving "can't create temporary file" errors. Suppose that once that happens, my role code throws an exception out of RoleEntryPoint.Run() and the role is restarted.
I'm not talking about perfect Azure-aware code here. My role might use third-party black-box code that knows nothing about Azure and "local storage" and just calls System.IO.Path.GetTempPath(), creating files in some Azure-unfriendly location.
The problem is that if the role is restarted on the very same host and the temporary folder is not cleaned up by some third party, the folder is still full of files and the role will be unable to function. According to this answer it might happen that local changes are preserved for my role, which is a huge problem in the above scenario.
Are local changes such as created temporary files guaranteed to be reset when a role is restarted? How do I ensure that the restarted role is in a reasonably clean state?
The role gets reset on new deployments, upgrades, and newly scaled instances from the golden image (base guest OS vhd). Generally for reboots and crashes, you get the same VHD and machine.
The code you write will not have permission to write to the OS drive (D:) - not without elevation, that is (or logging in via RDP to do this). Further, there is a quota on the role root drive (E:) that will prevent you from accidentally filling the drive with files. This quota used to be 10% of the package size. There is also a quota on the resource drive (C:), but that one is much more generous and depends on the VM size.
Nothing will be cleaned up on the non-local-resource drives, but you will eventually get errors if you try to exceed the quotas. You can turn off sticky storage on local resources and they will be cleaned up on reboot. Of course, like other changes to the disk, these non-local-resource temp files will occasionally be lost when the guest OS is upgraded (or the underlying root OS is). If you are running elevated and really screw up your installation (which you can do), you will need to hit the "Reimage" button on the portal and it will all go back to the golden image.
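One defensive pattern for the black-box temp-file scenario from the question is to point the process's temp folder at a declared local resource, so even GetTempPath() callers stay inside the quota'd, cleanable area. A sketch, where "TempStorage" is a hypothetical local resource that you would declare in ServiceDefinition.csdef with cleanOnRoleRecycle="true":

    using System;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Requires a <LocalStorage name="TempStorage" cleanOnRoleRecycle="true" ... />
            // element in ServiceDefinition.csdef.
            LocalResource temp = RoleEnvironment.GetLocalResource("TempStorage");

            // System.IO.Path.GetTempPath() honours TMP/TEMP, so black-box code
            // now writes its temp files into the local resource instead.
            Environment.SetEnvironmentVariable("TMP", temp.RootPath);
            Environment.SetEnvironmentVariable("TEMP", temp.RootPath);

            return base.OnStart();
        }
    }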

What is good/free software for monitoring IIS in Windows Vista?

I always forget to check what's going on in IIS on our webservers, and am wondering: is there some stupid applet or something that always runs locally that I can click on to check event logs and IIS logs on a remote machine?
Mark
You can set up Samurize to follow the output of the logging on the local and remote machines, but it requires some setup.
You can use a remote shell utility such as OpenSSH to connect to remote machines securely.
One at a time. Compmgmt.msc -> connect to another computer.
But one at a time is boring. Monitoring dozens of machines? I've been using logparser from MS for my log monitoring needs. I run a query that dumps errors and warnings to a csv file a few times a day.
So far, I've only used it to aggregate event logs across the dozen servers in our QA environment, but it accepts many forms of log input, including IIS. A query along these lines (reconstructed; I don't have my samples with me):
logparser "SELECT EventTypeName, SourceName, Message INTO output.csv FROM \\server1\System, \\server2\System WHERE EventTypeName LIKE '%Error%' OR EventTypeName LIKE '%Warning%'" -i:EVT -o:CSV
This shows: you can aggregate multiple servers. You tell it the input format - it supports a dozen log types. You can dump it into a CSV file. It looks sort of like SQL. This article on SecurityFocus has an IIS log sample.
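If you'd rather pull the same data from code than from the logparser command line, here is a small C# sketch that dumps errors and warnings from remote System logs (the server names are placeholders, and you need rights on the remote machines):

    using System;
    using System.Diagnostics;

    class RemoteEventLogDump
    {
        static void Main()
        {
            string[] servers = { "server1", "server2" };
            foreach (string server in servers)
            {
                // EventLog can open a named log on a remote machine.
                using (var log = new EventLog("System", server))
                {
                    foreach (EventLogEntry entry in log.Entries)
                    {
                        if (entry.EntryType == EventLogEntryType.Error ||
                            entry.EntryType == EventLogEntryType.Warning)
                            Console.WriteLine("{0}\t{1}\t{2}\t{3}",
                                server, entry.TimeGenerated, entry.Source, entry.Message);
                    }
                }
            }
        }
    }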
I'm not an applet type of guy, so I haven't thought much about desktop widgets to do this.
