Clarion-based application on Windows Server 2016 freezes. Suggestions needed on tools we could use to gather more information - windows

I have a Clarion application running on Windows Server 2016 that talks to a Sybase DB. Over the past few weeks we have found that the application freezes for different users at a given time. However, the user can abandon the frozen session and start a new one, and the new session works fine. The users are known to run multiple instances of the same application on one remote server or across multiple servers. To get more information on the freeze I looked through the application event logs on the system; I do see explorer.exe crashes, and these sometimes correlate with the time the issue occurs, but not always. I also checked the DB transaction logs on the Sybase side and found no crashes, errors or stuck connections. Since I have exhausted the options I know of, I am reaching out to you to ask whether there are any other places I can look to gather more information.
I would love to know of any applications or tools we could use to gather logs from a frozen Clarion application on Windows. It would also be good to hear whether anyone has faced a similar situation, and where and how you looked into it.
Thanks in advance for your help in this.

Clarion's runtime library and database drivers expect a persistent connection. Disconnects that are normal with remote ODBC can cause problems (including app hangs) unless you test for them at the ABC file manager level and reconnect, or use similar steps to detect and recover.
If you're looking for specifics about what's going on between the driver and the SQL backend, I suggest using Clarion's database driver trace facilities. From the help topic: "Logging Driver I/O for debugging":
To view the trace details in DebugView, name the target trace file "DEBUG:".
Logging opens the named logfile for exclusive access. If the file exists, the new log data is appended to the file.
On Demand Logging
For on-demand logging you can use property syntax within your program to conditionally turn various levels of logging on and off. The logging is effective for the target table and any view for which the target table is the primary table.
file{PROP:Profile} = Pathname   !Turns Clarion I/O logging on
file{PROP:Profile} = 'DEBUG:'   !Turns Clarion I/O logging on and
                                !sends output via OutputDebugString()
                                !(viewable via DebugView, etc.)
file{PROP:Profile} = ''         !Turns Clarion I/O logging off
PathName = file{PROP:Profile}   !Queries the name of the log file
file{PROP:Log} = string         !Writes the string to the log file
file{PROP:Log} = 'DEBUG:'       !Writes the string to the log file
file{PROP:Details} = 1          !Turns Record Buffer logging on
file{PROP:Details} = 0          !Turns Record Buffer logging off
where Pathname is the full pathname or the filename of the log file to create. If you do not specify a path, the driver writes the log file to the current directory.
You can also accomplish on demand logging with a SEND() command and the LOGFILE driver string. See LOGFILE for more information.
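For instance, a hypothetical on-demand call might look like the line below. I have not verified the exact driver-string syntax, so treat it as an assumption and check the LOGFILE help topic before relying on it:
SEND(file, 'LOGFILE=c:\temp\io.log')  !Assumed syntax - starts driver logging for this table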
Example I use frequently, which was based on the help above:
SYSTEM{PROP:DriverTracing} = '1'
CRMNotes{PROP:TraceFile} = 'DEBUG:'
CRMNotes{PROP:Details}=1
CRMNotes{PROP:Profile}= 'DEBUG:'
CRMNotes{PROP:LogSQL} = 1

Related

OEM 13c Log File Monitoring

I have installed OEM 13c and deployed a couple of agents and want to test out the Log File Monitoring utility. I have enabled it and added a log file to monitor.
When I test it, no alerts are raised when matching messages are written to the log file. On the agent server I have tailed the file and can see the messages arriving in the log.
Does anyone have experience adding log files to OEM? I may have configured it wrong. Alternatively, are there any troubleshooting steps I can follow to see whether the server is even contacting the agent to read the log file? The status of the agent is good, with no incidents.
Without access to the system it is difficult to tell you the exact cause. However, I can list a few potential causes that I have run into personally:
Permissions. The Oracle Enterprise Manager agent is quite convoluted when it comes to system permissions on a remote server. The agent can be owned and run by any number of users, but during metric evaluation it may also need sudo or PAM authentication to access certain entities on the server. Depending on the authentication profiles on that server, this could be the cause of your issue. There are ways to grant the agent access through the PAM stack if necessary.
Syntax. The wildcard syntax in the OEM GUI can be a little confusing as well. I would experiment with the wildcard elements on the "String" component to rule out something as simple as needing wildcards at the beginning and end of the match string. Without diving into the agent plugin binaries, it is difficult to say exactly how the agent evaluates this particular metric.
One suggestion I would have is to go through the agent commands. There are specific commands you can run to manually force an agent to evaluate a particular metric for a particular target. This can allow you to manually trigger the metric collection locally on the server and evaluate what exactly is being performed at the agent level.
On the system I was running (12c) the command was as follows:
emctl control agent runCollection <hostname>:host host_storage
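If you are unsure of the exact target name, target type or collection item to pass, the agent itself can tell you. A rough sequence along these lines (the angle-bracket values are placeholders; verify the exact commands with emctl help on your agent version):
emctl config agent listtargets
emctl control agent runCollection <target_name>:<target_type> <collection_item>
emctl upload agent
emctl status agent
listtargets shows the targets (and their types) the agent monitors, runCollection forces the metric evaluation locally, upload pushes the collected data to the OMS, and status confirms the last successful upload.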

Can I delete .dmp and .phd files of Liberty Profile server?

In the folder <WAS Liberty Profile root>\<profile>\usr\servers\defaultServer there are many files named core.*.dmp and heapdump.*.phd. The size of these files ranges from 130 MB to 1.3 GB, even though my deployed app is only about 4 MB.
Can I delete these files *.dmp and *.phd?
What are these files for?
Short answer: yes, it's safe to delete them, but you should find out why they're appearing, as it could indicate that your application is not running correctly.
If your dump files were created a long time ago, or you know you were debugging an OutOfMemoryError or have been running server javadump --include=heap,system, then go ahead and delete the files. If, however, you keep getting new dump files and don't know why, then read on.
The core and heapdump files contain a snapshot of the memory of the application from a specific point in time. Usually you do this to capture the state of your application at the point where something goes wrong so that you can examine it with analysis tools and try to work out what went wrong.
For example, by default the IBM JVM will produce a dump when an OutOfMemoryError is thrown. This allows you to look at the dump file and see what is using up all the memory.
If you have a corresponding javacore file, the fourth line or so should say why the memory dump was made.
e.g. 1TISIGINFO Dump Requested By User (00100000) Through com.ibm.jvm.Dump.javaDumpToFile (caused by running server javadump)
or 1TISIGINFO Dump Event "user" (00004000) received (caused by running kill -3)
If it's a "user" event, then something's asking the JVM to create a dump. If not, and you're still not sure what's causing it, check your jvm.options file for any -Xdump options which can be used to cause the JVM to create a dump in response to certain events. More information on that in the Knowledge Center.

Read application log written on Windows Azure

I have 10 applications that share the same logging logic: each writes its log to a text file in the application root folder.
I have another application that reads the log files of all the applications and shows the details in a web page.
Can the same be achieved on Windows Azure? I don't want to use the 'DiagnosticMonitor' APIs, as I cannot change the logging logic of the applications.
Thanks,
Aman
Even if this is technically possible, it is not advisable: the Fabric Controller can re-create any role instance at a whim (well, for good reasons, but unpredictably nonetheless), and whenever this happens you will lose any files stored locally on that role.
So - primarily you should be looking for a different place to store those logs, and there are many options, but all require that you change the logging logic of the application.
You could do this, but aside from the issue Yossi pointed out (the log would be ephemeral; it could get deleted at any time), you'd have a different log file on each role instance (VM). That means when you hit your web page to view the log, you'd see whatever happened to be in the log on that particular VM, instead of what you presumably want (a roll-up of the log files across all VMs).
Windows Azure Diagnostics could help, since you can configure it to copy log files off to blob storage (so no need to change the logging). But honestly I find Diagnostics a bit cumbersome for this. It will end up creating a lot of different blobs, and you'll have to change the log viewer to read all those blobs and combine them.
I personally would suggest writing a separate piece of code that monitors the log file and, for each new line, stores the line as an entity (row) in table storage. This bit of code could be launched as a startup task and just run continuously as a separate process (leaving everything else unchanged). Then modify the log viewer to read the last n entities from table storage and display them.
(I'm assuming you can modify the log viewer even if you can't modify the apps that log to the file.)
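To make that concrete, here is a minimal sketch of such a forwarder in Python using the azure-data-tables package. Treat it as an illustration only: the table name, log path, environment variables and RowKey scheme are assumptions of mine, not anything from the question.
import os, time, uuid
from azure.data.tables import TableServiceClient

LOG_PATH = r"C:\app\app.log"                               # assumed location of the app's log file
TABLE_NAME = "AppLogs"                                     # assumed table name
service = TableServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
table = service.create_table_if_not_exists(TABLE_NAME)
instance = os.environ.get("INSTANCE_ID", "instance-0")     # identifies the role instance / VM

with open(LOG_PATH, "r") as log:
    log.seek(0, os.SEEK_END)                               # start tailing from the current end of the file
    while True:
        line = log.readline()
        if not line:
            time.sleep(1.0)                                # wait for the app to write more
            continue
        # Inverted timestamp as RowKey: newer lines get smaller keys, so they sort first
        # within a partition and the viewer can read "the last n entities" cheaply.
        row_key = "{:020d}-{}".format(2**63 - time.time_ns(), uuid.uuid4().hex[:8])
        table.create_entity({
            "PartitionKey": instance,                      # one partition per role instance
            "RowKey": row_key,
            "Message": line.rstrip(),
        })
The log viewer then reads the newest entities back with something like table.query_entities("PartitionKey eq 'instance-0'") and stops after n rows, instead of reading files off each VM.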
What about writing logs to something like an Azure storage table? You just need to define a unique PartitionKey/RowKey, and then you can easily retrieve the log entries for the web page.

Transaction Log files in edb database

In my attempt to extract data (dumps and selective reading of columns) from a diverse collection of edb databases I ran into a fundamental problem. I have an edb database that comes with a couple of log files. I know what information is in the database, but I only manage to extract half of it; I suspect the remaining half still sits in the log files. I assumed the EDB engine knows where the log files are and automatically loads them when attaching the database (JET_paramSystemPath, JET_paramLogFilePath and JET_paramBaseName are properly set). Is that a wrong assumption? If so, what should I do to have the logs loaded as well?
Alternatively, would it be possible to simply commit the transactions to the EDB file and get rid of the logs?
If there are uncommitted transactions then the database will be marked as 'inconsistent'. You can check this using ESENTUTL /MH against the database. Calling JetAttachDatabase against an inconsistent database will always fail.
So, if your program is able to attach and open the database then it is consistent. There are two ways a database can be made consistent:
A clean shutdown of ESENT.
Running recovery using the logfiles at JetInit time.
The first thing that JetInit does is to look for the logfiles specified by JET_paramLogFilePath and JET_paramBaseName. Logfiles contain the full paths of the database(s) they reference and the transactions in the logfiles are then committed to the database(s). So, if you set the system parameters properly then ESENT will load the logs when attaching the database.
On the other hand, if you don't set the parameters properly then your program will still work, but only on databases that don't require recovery: JetInit won't find any logfiles, so it won't do anything, and the attach will succeed because the database is already consistent.
One further twist is that the logfiles contain the full path to the database. This means that if you have copied/moved the database then recovery will not work. Starting with Windows Server 2003 you can deal with this by setting JET_paramAlternateDatabaseRecoveryPath to the directory containing the database.
Important: to be safe you should always attach and open the database using the read-only flags. This will avoid any problems caused by bad logfile settings. A common problem is that read-only applications end up creating a set of logfiles in a different directory which prevent the database from being recovered properly.
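As a rough command-line illustration of the same points (the paths and the log base name are placeholders, and the exact switches vary between Windows versions, so check esentutl /? first):
esentutl /mh c:\copy\database.edb
esentutl /r edb /l c:\copy\logs /s c:\copy
The first command prints the database header; the State field tells you whether the database is in Clean Shutdown (consistent) or Dirty Shutdown (needs recovery). The second replays the logfiles with base name "edb" into the database; once recovery completes cleanly the database is consistent, which also covers the second part of the question - after a clean replay the logs are no longer needed just to read the data.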

What is good/free software for monitoring IIS in Windows Vista?

I always forget to check what's going on in IIS on our webservers, and am wondering: is there some stupid applet or something that always runs locally that I can click on to check event logs and IIS logs on a remote machine?
Mark
You can set up Samurize to follow the log output on local and remote machines, but it requires some setup.
You can use a remote shell utility such as OpenSSH to connect to remote machines securely.
One at a time. Compmgmt.msc -> connect to another computer.
But one at a time is boring. Monitoring dozens of machines? I've been using logparser from Microsoft for my log monitoring needs. I run a query that dumps errors and warnings to a CSV file a few times a day.
So far I've only used it to aggregate event logs across the dozen servers in our QA environment, but it accepts many log input formats, including IIS. A pseudo query from memory (I don't have samples with me):
logparser "Select [eventtype], [message] into output.csv FROM \\server1\system, \\server2\system" -i EVT
This shows that you can aggregate multiple servers, you tell it the input format (it supports a dozen log types), you can dump the results into a CSV file, and the query language looks a lot like SQL. This article on SecurityFocus has an IIS log sample.
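For IIS specifically, a query along these lines should work (written from memory and untested; the field names are standard W3C log fields and the path is a placeholder for wherever your site logs live):
logparser "SELECT TOP 25 cs-uri-stem, sc-status, COUNT(*) AS Hits FROM \\server1\c$\inetpub\logs\LogFiles\W3SVC1\*.log GROUP BY cs-uri-stem, sc-status ORDER BY Hits DESC" -i:IISW3C -o:CSV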
I'm not an applet type of guy, so I haven't thought much about desktop widgets to do this.
