I am using the PL/SQL Package OS_COMMAND (which itself uses Java) to execute shell commands. I can observe their return codes.
I want to determine whether I am operating on Windows or any other operating system.
I've come up with different approaches:
My first idea: execute a specific Windows command (which should
always be successful) and check the return code: 0 means Windows,
anything else means other OS.
Using a Java Stored Procedure
Using database information, for example
SELECT platform_id, platform_name FROM v$database
SELECT dbms_utility.port_string FROM DUAL
SELECT NAME FROM v$dbfile and check the format
Which one would you consider the "safest"? Do you use other approaches? What are advantages/disadvantages?
I'd like to avoid the Java Stored Procedure, and I don't know exactly how to interpret the database information (how to systematically check for Windows: result containing 'WIN' or 'Windows', or ...?). If you'd check with a specific Windows command, which one should I use?
I'll be glad about advice in any direction.
A somewhat late response but anyway here are my two cents.
I agree with you that it seems best to get the information directly from the database environment to avoid unnecessary calls between "environments". So that brings us directly to the 3 mentioned options:
v$database: you can rely on this information, so from that point of view it's a pretty "safe" solution. On the downside, you need at least SELECT privilege on the underlying view for this to work, and that might be an issue in certain environments.
dbms_utility.port_string: no worries about privileges here, but then again you don't get as much info as with option 1. Please also keep in mind to use this only for OS determination, not for the Oracle version.
v$dbfile: this is a deprecated view, so first of all it's better to use v$datafile. But then there's an important downside with this approach: what are you going to do when ASM is used? If I'm not mistaken, the name format is the same on all OS platforms when using ASM.
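Whichever of the first two options you pick, the client-side check can be a simple case-insensitive substring test on the returned string. A minimal sketch in C; the sample platform strings in the comments are typical examples, not an exhaustive list, so verify the exact values against your own instance:

```c
#include <ctype.h>
#include <string.h>

/* Portable case-insensitive substring search (strcasestr is nonstandard). */
static int contains_nocase(const char *haystack, const char *needle) {
    size_t nlen = strlen(needle);
    for (; *haystack; haystack++) {
        size_t i;
        for (i = 0; i < nlen; i++) {
            if (tolower((unsigned char)haystack[i]) !=
                tolower((unsigned char)needle[i]))
                break;
        }
        if (i == nlen) return 1;
    }
    return 0;
}

/* Decide "is Windows" from a platform string such as
 * v$database.platform_name (e.g. "Microsoft Windows x86 64-bit")
 * or dbms_utility.port_string (e.g. "IBMPC/WIN_NT64-9.1.0"). */
int is_windows_platform(const char *platform) {
    return contains_nocase(platform, "win");
}
```

The same test covers both sources, since both contain "WIN" on Windows platforms.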
Cheers
Björn
This is a very simple question about Screen.DeviceName
Does it always return "\\.\DISPLAY1" through "\\.\DISPLAYn", or are there any values other than those?
I have multiple monitors, so I have "\\.\DISPLAY1", "\\.\DISPLAY2" and "\\.\DISPLAY3".
Under the covers this just calls the GetMonitorInfo Windows API function. The documentation makes no mention of a specific format that the name will always be in.
The device name itself being returned is therefore an implementation detail of the OS and potentially liable to change between releases, or in a software update. I'd strongly recommend you don't assume it will always be in that format, even if - for now - it is.
I need to create a simple module/executable that can print checks (fill out the details). The details need to be retrieved from an existing Oracle 9i DB on Windows (XP or later).
Obviously, I shall need to define the pixel format as to where the details (Name, amount, etc) are to be filled.
The major constraint is that the client needs/strongly prefers an executable, not code that is either interpreted or uses a VM. This is so that installation is extremely simple. This requirement really cannot be changed.
Now, the question is, how do I do it.
(.NET, java and python are out of the question, unless there is a way around the VMs)
I have never worked with MFC or other native Windows APIs. I am also unfamiliar with GDI.
Do I have any other option? Any language that can abstract the complexities and can be packed into a x86 binary?
Also, if not then any code help with GDI would be appreciated.
The most obvious possibilities would probably be C, C++, and Delphi. There are a few others such as Ada (e.g., Gnat), but offhand I don't see a lot of reason to favor them (especially for a job this small).
At least the way I'd write this, the language would be almost irrelevant. I'd have it driven almost entirely by an external configuration file that gave the name of each field and the location where it should be printed. I'd probably use something like the MM_LOMETRIC mapping mode, so Windows will handle most of the translation to real-world coordinates (and use tenths of a millimeter in the configuration file, so you can use the coordinates without any translation).
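To make the config-file idea concrete, here's a minimal sketch in C of just the parsing side. The line format ("<field-name> <x> <y>", coordinates in tenths of a millimetre) is invented for illustration, and the actual GDI drawing calls are omitted:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical config line: "<field-name> <x> <y>", coordinates in
 * tenths of a millimetre, matching MM_LOMETRIC so they can be handed
 * to the GDI text-output call without any further translation. */
typedef struct {
    char name[32];
    int  x10mm;   /* horizontal position, 0.1 mm units */
    int  y10mm;   /* vertical position,   0.1 mm units */
} Field;

/* Parse one config line; returns 1 on success, 0 on a malformed line. */
int parse_field(const char *line, Field *out) {
    return sscanf(line, "%31s %d %d", out->name, &out->x10mm, &out->y10mm) == 3;
}
```

The print routine would then loop over the parsed fields, look up each field's value from the database row, and draw it at (x, y).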
The more difficult part of this will probably be the database connectivity. There are various libraries around to help out with that, so it won't be terribly difficult, but it's still not (quite) as trivial as the drawing part.
I encountered a new term called 'NAILDUMPS' when I analysed a flowchart explaining a series of JCLs. In some steps of that flowchart it is mentioned that "this file is naildumped". Can anyone explain what a naildump is and why it is used?
Thanks in advance
In all my travels through the mainframe world, I've never heard this term, not with Fault Analyser (or its competition) or with system abend stuff, where you'd expect to find it.
Most likely thing is that it's an application specific thing. If you could provide the context around the comment in the JCL, such as a program name like IEBGENER or IEFBR14 (with the options), it may be easier to tell you what it's doing.
For what it's worth (a), there's one page that Google serves up showing one use for this elusive program. The link states that, to empty a dataset, you can use:
//STEP01 EXEC PGM=NAILDUMP
//FILE DD DSN=your filename,DISP=SHR
in your JCL. But given the scarcity of information on this program, the fact it doesn't seem to appear in any of the IBM z/OS docs and the fact that there are perfectly good supported ways of doing this, I'd still maintain that it's some in-house utility. Ask your local sysprogs - even if they don't know, they should be able to see inside the JCL member.
(a) It's probably not worth much since there are all sorts of wonderful things you can do with JCL just by specifying DD commands, even with programs that do absolutely nothing, a la the infamous IEFBR14 program.
NAILDUMP is not a "normal" name for any standard IBM mainframe (z/OS) utility program.
This leaves three possibilities. NAILDUMP could be:
a locally developed program, in which case you need to find the local documentation (good luck!).
a catalogued procedure fronting a standard utility. For example, DFSORT is a catalogued procedure used in many shops to front the standard system sort program.
an alias for another program. For example, ICEMAN is a commonly defined alias for the standard system sort program.
If you had access to the mainframe (or can find someone who does), the ISRDDN utility under TSO can be used to find the actual program load module that NAILDUMP relates to, provided it is a locally developed program or an alias for some other standard utility program. This link gives a brief explanation of how to do it.
If it is a catalogued procedure, you can find it by searching for a member named NAILDUMP in the system default catalogued procedure library or those specified in the JCL.
Getting to the real name can be a bit of a challenge, but once you get there it should become clear what it is being used for through context.
It seems this is a case where the author of the document is very familiar with a term ("naildump") that the audience is not.
I think you should first ask the author for clarification, because even if someone tells you what it is supposed to mean, they could be wrong for that case in particular.
Given the little context you have, it makes some sense that "NAILDUMP" empties the dataset or deletes it.
I am aware that this is nothing new and has been done several times. But I am looking for some reference implementation (or even just reference design) as a "best practices guide". We have a real-time embedded environment and the idea is to be able to use a "debug shell" in order to invoke some commands. Example: "SomeDevice print reg xyz" will request the SomeDevice sub-system to print the value of the register named xyz.
I have a small set of routines that is essentially made up of 3 functions and a lookup table:
a function that gathers a command line - it's simple; there's no command line history or anything, just the ability to backspace or press escape to discard the whole thing. But if I thought fancier editing capabilities were needed, it wouldn't be too hard to add them here.
a function that parses a line of text argc/argv style (see Parse string into argv/argc for some ideas on this)
a function that takes the first arg on the parsed command line and looks it up in a table of commands & function pointers to determine which function to call for the command, so the command handlers just need to match the prototype:
int command_handler( int argc, char* argv[]);
Then that function is called with the appropriate argc/argv parameters.
Actually, the lookup table also has pointers to basic help text for each command, and if the command is followed by '-?' or '/?' that bit of help text is displayed. Also, if 'help' is used as a command, the command table is dumped (possibly only a subset if a parameter is passed to the 'help' command).
Sorry, I can't post the actual source - but it's pretty simple and straightforward to implement, and functional enough for pretty much all the command line handling needs I've had for embedded systems development.
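A rough sketch of the pieces described above (tokenizer, lookup table with help text, dispatch) might look like this in C. The command set and the MAX_ARGS limit are invented for illustration:

```c
#include <stddef.h>
#include <string.h>

#define MAX_ARGS 8

/* All command handlers share the prototype from the answer above. */
typedef int (*cmd_fn)(int argc, char *argv[]);

/* Two demo handlers: 'echo' returns its argument count, 'ver' a version. */
static int cmd_echo(int argc, char *argv[]) { (void)argv; return argc - 1; }
static int cmd_ver(int argc, char *argv[])  { (void)argc; (void)argv; return 100; }

/* Lookup table: command name, handler, one-line help text. */
static const struct {
    const char *name;
    cmd_fn      fn;
    const char *help;
} commands[] = {
    { "echo", cmd_echo, "echo <args> - count the arguments" },
    { "ver",  cmd_ver,  "ver - report firmware version"     },
};

/* Split a line into argv[] in place, then dispatch on argv[0].
 * Returns the handler's result, or -1 for an empty/unknown command. */
int shell_dispatch(char *line) {
    char *argv[MAX_ARGS];
    int argc = 0;
    for (char *tok = strtok(line, " \t"); tok && argc < MAX_ARGS;
         tok = strtok(NULL, " \t"))
        argv[argc++] = tok;
    if (argc == 0) return -1;
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
        if (strcmp(argv[0], commands[i].name) == 0)
            return commands[i].fn(argc, argv);
    return -1;
}
```

Adding a command is then just a matter of writing a handler with the shared prototype and adding one table row.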
You might bristle at this response, but many years ago we did something like this for a large-scale embedded telecom system using lex/yacc (nowadays I guess it would be flex/bison, this was literally 20 years ago).
Define your grammar, define ranges for parameters, etc... and then let lex/yacc generate the code.
There is a bit of a learning curve, as opposed to rolling a 1-off custom implementation, but then you can extend the grammar, add new commands & parameters, change ranges, etc... extremely quickly.
You could check out libcli. It emulates Cisco's CLI and apparently also includes a telnet server. That might be more than you are looking for, but it might still be useful as a reference.
If your needs are quite basic, a debug menu which accepts simple keystrokes, rather than a command shell, is one way of doing this.
For registers and RAM, you could have a sub-menu which just does a memory dump on demand.
Likewise, to enable or disable individual features, you can control them via keystrokes from the main menu or sub-menus.
One way of implementing this is via a simple state machine. Each screen has a corresponding state which waits for a keystroke, and then changes state and/or updates the screen as required.
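Such a state machine can be sketched in a few lines of C; the states and key bindings here are invented for illustration:

```c
/* Minimal keystroke-driven debug menu as a state machine.
 * Each state corresponds to one screen; screen redraws are omitted. */
typedef enum { MENU_MAIN, MENU_MEMORY, MENU_FEATURES } MenuState;

/* Advance the menu state for one keystroke; unknown keys keep the state. */
MenuState menu_step(MenuState s, char key) {
    switch (s) {
    case MENU_MAIN:
        if (key == 'm') return MENU_MEMORY;    /* memory-dump sub-menu */
        if (key == 'f') return MENU_FEATURES;  /* feature toggles      */
        break;
    case MENU_MEMORY:
    case MENU_FEATURES:
        if (key == 'q') return MENU_MAIN;      /* back to main menu    */
        break;
    }
    return s;
}
```

The main loop just reads one character from the serial port, calls menu_step, and redraws the screen whenever the state changes.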
vxWorks includes a command shell, that embeds the symbol table and implements a C expression evaluator so that you can call functions, evaluate expressions, and access global symbols at runtime. The expression evaluator supports integer and string constants.
When I worked on a project that migrated from vxWorks to embOS, I implemented the same functionality. Embedding the symbol table required a bit of gymnastics, since it does not exist until after linking. I used a post-build step to parse the output of the GNU nm tool to create a symbol table as a separate load module.
In an earlier version I did not embed the symbol table at all, but rather created a host-shell program that ran on the development host where the symbol table resided, and communicated with a debug stub on the target that could perform function calls to arbitrary addresses and read/write arbitrary memory. This approach is better suited to memory-constrained devices, but you have to be careful that the symbol table you are using and the code on the target are from the same build. Again, that was an idea I borrowed from vxWorks, which supports both the target and host based shell with the same functionality. For the host shell, vxWorks checksums the code to ensure the symbol table matches; in my case it was a manual (and error prone) process, which is why I implemented the embedded symbol table.
Although initially I only implemented memory read/write and function call capability I later added an expression evaluator based on the algorithm (but not the code) described here. Then after that I added simple scripting capabilities in the form of if-else, while, and procedure call constructs (using a very simple non-C syntax). So if you wanted new functionality or test, you could either write a new function, or create a script (if performance was not an issue), so the functions were rather like 'built-ins' to the scripting language.
To perform the arbitrary function calls, I used a function pointer typedef that took an arbitrarily large number (24) of arguments. Then, using the symbol table, you find the function address, cast it to the function pointer type, and pass it the real arguments plus enough dummy arguments to make up the expected number, and thus create a suitable (if wasteful) stack frame.
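The trick might be sketched like this, with 8 arguments instead of 24 to keep it short. Note that calling a function through a mismatched pointer type is undefined behaviour in ISO C; it works only on calling conventions where surplus integer arguments are harmlessly ignored, which is exactly what this technique relies on:

```c
/* One pointer type used to call any integer-argument function found in
 * the symbol table. The real system used 24 arguments; 8 shown here. */
typedef int (*any_fn)(int, int, int, int, int, int, int, int);

/* A demo target function, standing in for one looked up by name. */
static int add2(int a, int b) { return a + b; }

/* Call a function of (up to) two real arguments through the generic
 * pointer type, padding with dummy zeros to fill the stack frame.
 * WARNING: relies on ABI behaviour, not on the C standard. */
int call_by_address(any_fn fn, int a1, int a2) {
    return fn(a1, a2, 0, 0, 0, 0, 0, 0);
}
```

In the real shell, the any_fn value comes from the embedded symbol table rather than a direct cast of a known function.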
On other systems I have implemented a Forth threaded interpreter, which is a very simple language to implement, but has a less than user friendly syntax perhaps. You could equally embed an existing solution such as Lua or Ch.
For a small, lightweight thing you could use Forth. It's easy to get going (Forth kernels are SMALL).
Look at figForth, LINa and GnuForth.
Disclaimer: I don't know Forth, but OpenBoot and the PCI bus do, and I've used them and they work really well.
Alternative UIs
Deploy a web server on your embedded device instead. Even serial will work with SLIP, and the UI can be reasonably complex (or even serve up a JAR and get really, really complex).
If you really need a CLI, you can then point at a link and get a telnet session.
One alternative is to use a very simple binary protocol to transfer the data you need, and then make a user interface on the PC, using e.g. Python or whatever is your favourite development tool.
The advantage is that it minimises the code in the embedded device, and shifts as much of it as possible to the PC side. That's good because:
It uses up less embedded code space—much of the code is on the PC instead.
In many cases it's easier to develop a given functionality on the PC, with the PC's greater tools and resources.
It gives you more interface options. You can use just a command line interface if you want. Or, you could go for a GUI, with graphs, data logging, whatever fancy stuff you might want.
It gives you flexibility. Embedded code is harder to upgrade than PC code. You can change and improve your PC-based tool whenever you want, without having to make any changes to the embedded device.
If you want to look at variables: if your PC tool is able to read the ELF file generated by the linker, then it can find out a variable's location from the symbol table. Even better, read the DWARF debug data and know the variable's type as well. Then all you need is a "read-memory" protocol message on the embedded device to get the data, and the PC does the decoding and displaying.
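The embedded side of such a protocol can stay tiny. A sketch of just the read-memory handler, with framing, checksums and transport omitted as an assumption; the PC side does all the symbol/DWARF lookup and sends only an address and a length:

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Handle one "read-memory" request: copy len bytes starting at addr
 * into the reply buffer, clamped to the buffer's capacity.
 * Returns the number of bytes actually copied. */
size_t handle_read_memory(uintptr_t addr, size_t len,
                          uint8_t *reply, size_t reply_cap) {
    if (len > reply_cap)
        len = reply_cap;
    memcpy(reply, (const void *)addr, len);  /* raw target memory */
    return len;
}
```

A matching "write-memory" message is the obvious companion, and together they are enough for the PC tool to display and poke any variable it can locate in the ELF file.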
I am not a DBA. However, I work on a web application that lives entirely in an Oracle database (Yes, it uses PL/SQL procedures to write HTML to clobs and then vomits the clob at your browser. No, it wasn't my idea. Yes, I'll wait while you go cry.).
We're having some performance issues, and I've been assigned to find some bottlenecks and remove them. How do I go about measuring Oracle performance and finding these bottlenecks? Our unhelpful sysadmin says that Grid Control wasn't helpful, and that he had to rely on "his experience" and queries against the data dictionary and "v$" views.
I'd like to run some tests against my local Oracle instance and see if I can replicate the problems he found so I can make sure my changes are actually improving things. Could someone please point me in the direction of learning how to do this?
Not too surprising there are entire books written on this topic.
Really what you need to do is divide and conquer.
First thing is to just ask yourself some standard common-sense questions. For example: has performance slowly degraded, or was there a big drop in performance recently?
After the obvious a good starting point for you would be to narrow down where to spend your time - top queries is a decent start for you. This will give you particular queries which run for a long time.
If you know specifically which screens in your front-end are slow, and you know which stored procedures go with them, I'd put in some logging - simple DBMS_OUTPUT.put_line calls with some wall-clock information at key points. Then I'd run those interactively in SQLNavigator to see which part of the stored procedure is going slow.
Once you start narrowing it down, you can look at why a particular query is going slow. EXPLAIN PLAN will be your best friend to start with.
It can be overwhelming to analyze database performance with Grid Control, and I would suggest starting with the simpler AWR report - you can find the scripts to generate them in $ORACLE_HOME/rdbms/admin on the db host. This report will rank the SQL seen in the database by various categories (e.g. CPU time, disk I/O, elapsed time) and give you an idea where the bottlenecks are on the database side.
One advantage of the AWR report is that it is a SQL*Plus script and can be run from any client - it will spool HTML or text files to your client.
edit:
There's a package called DBMS_PROFILER that lets you do what you want, I think. I found out my IDE will profile PL/SQL code as I would guess many other IDE's do. They probably use this package.
http://www.dba-oracle.com/t_dbms_profiler.htm
http://www.databasejournal.com/features/oracle/article.php/2197231/Oracles-DBMSPROFILER-PLSQL-Performance-Tuning.htm
edit 2:
I just tried the Profiler out in PL/SQL Developer. It creates a report on the total time and occurrences of snippets of code during runtime and gives code location as unit name and line number.
original:
I'm in the same boat as you, as far as the crazy PL/SQL generated pages go.
I work in a small office with no programmer particularly versed in advanced features of Oracle. We don't have any established methods of measuring and improving performance. But the best bet, I'd guess, is to try out different PL/SQL IDEs.
I use PL/SQL Developer by Allround Automations. It's got a testing functionality that lets you debug your PL/SQL code, and it may have some benchmarking feature I haven't used yet.
Hope you find a better answer. I'd like to know too. :)
"I work on a web application that
lives entirely in an Oracle database
(Yes, it uses PL/SQL procedures to
write HTML to clobs and then vomits
the clob at your browser"
Is it the APEX product? That's the web application environment now included as a standard part of the Oracle database (although technically it doesn't spit out CLOBs).
If so, there is a whole bunch of instrumentation already built in to the product/environment (e.g. it keeps a rolling two-week history of activity).