The simplest test to check ODP connectivity - Oracle

I want to install a new version of ODP into a production environment and I'm looking for the simplest test that the drivers have actually gone in OK and that the bespoke apps on the server can still connect to the database.
Sounds easy, but there are some caveats...
First, one thing I need to do over and above the Oracle setup is to manually add a TNS_ADMIN entry to the registry. This is critical in the environment I'm installing to; a missing TNS_ADMIN entry, or one pointing at the wrong path, is the usual cause of problems. Effectively, this is what I'm really looking to test.
Next, since these are production servers, there are no tools installed on them, so I can't just fire up a copy of Toad, for example. The only software I can safely assume to be present is the operating system (Windows 2003) and the Oracle drivers (ODP 11.2 R3, which at the time of writing is Oracle's current production version).
Next, the bespoke apps on there are generally service-oriented, so simply saying "just run up one of the apps" might be easier said than done. Also on this point, it won't actually be me rolling these drivers in; it will be an operator who will have limited knowledge of what they're doing (sad but true). So whatever test I settle on, it's got to be easy enough for the guy to follow, and easy enough for him to interpret the results.
Next, I'm fully aware I could write a 5-line test rig just to open and close a connection. This has the advantage of making life easy for the operator, and is definitely a fallback option, but I can't help wondering if there is an easier approach.
I guess I'm just wondering whether anyone knows of some kind of utility, more than likely shipping with ODP, which will perform a connection test. Even if I end up giving the operator a .bat file to execute, it'll be simpler (and less error-prone?) than writing my own app.
Points for the best suggestion,
Pete

I don't think there is one in just ODP.net, no. At least, I don't see anything in the Entity Framework beta version of it (which I have installed).
In the larger driver packages you could use SQL*Plus, which is a command line tool. But for your purposes the simplest answer is likely to write a very small app that just connects and does a SELECT * FROM DUAL;
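A minimal sketch of what that small app could look like, assuming the unmanaged ODP.NET provider (Oracle.DataAccess.Client) that the 11.2 install registers; the TNS alias, user id and password are placeholders for the operator to substitute. Because the data source is a TNS alias, a successful run also confirms that the TNS_ADMIN registry entry points at a usable tnsnames.ora, which is what the question really wants to test.

    using System;
    using Oracle.DataAccess.Client;   // unmanaged ODP.NET provider

    class OdpConnectionTest
    {
        static int Main()
        {
            // Placeholders: replace with a real TNS alias and database login.
            const string connectString =
                "Data Source=MYDB;User Id=test_user;Password=test_password";

            try
            {
                using (var conn = new OracleConnection(connectString))
                using (var cmd = new OracleCommand("SELECT 1 FROM DUAL", conn))
                {
                    conn.Open();          // fails here if the driver or TNS_ADMIN is broken
                    cmd.ExecuteScalar();  // proves a round trip to the database
                }
                Console.WriteLine("SUCCESS: connected and queried DUAL.");
                return 0;
            }
            catch (Exception ex)
            {
                Console.WriteLine("FAILED: " + ex.Message);
                return 1;
            }
        }
    }

Compiled once on a development machine and copied next to a one-line .bat file, this keeps the operator's job down to reading SUCCESS or FAILED; the exit code can also be checked from the batch file.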

I got the operator to create an ODBC connection and test it. This can be done using Windows alone, with no additional software required; they just need to make sure they use the right driver and have a valid database login to hand.


UiPath terminal connection - internal vs EHLLAPI?

I'm trying to automate in an AS400 terminal using UiPath.
I experience stability problems where the screen "blinks", which can cause errors. This outputs a trace log: "XMLScreen:Render BUGBUG XMLScreen.Field is blank".
I am connecting with the UiPath internal option and wondering whether that might be the cause of my problem. I've searched for hours but can't find any information on the difference between UiPath internal and IBM EHLLAPI. The only difference I know of is that EHLLAPI uses an already existing terminal session.
Is one way of connecting generally a better choice than the other regarding stability and why?
All inputs are greatly appreciated! :)
The two options work completely differently.
EHLLAPI works against an existing installation of IBM i Access for Windows or IBM i Access Client Solutions (ACS). It is a very specific, solid, and well-established IBM proprietary API that does not use Telnet in any way. You would need to ensure that EHLLAPI support is enabled (e.g. http://www-01.ibm.com/support/docview.wss?uid=nas8N1010639 for ACS).
Your organisation may perhaps be using a third party emulator, e.g. Rumba - I think EHLLAPI is supported by some of these.
The UiPath internal option starts and writes to a TN5250 session, over which it sounds from the documentation as if you have little control (e.g. regarding keyboard mappings).
I would suggest you go with EHLLAPI if you can (i.e. if you have a suitable IBM or third party product installed as above).
But, are you absolutely certain you need to screen scrape this at all? Do you have no access to the IBM i source code, which would potentially allow you to write a suitable program to run natively? I feel honour bound to say this, because there is always grief with screen scraping IBM i applications (e.g. panels appear that you are not expecting, especially at sign on time, or if an error occurs).

Get hardware ID with Inno Setup script to prevent piracy

I just finished my program, and now I want to build a setup with Inno Setup that gets the hardware ID and stores it in a file on the CD, so that the program can be installed on only one computer with only one license. Unfortunately I am not at all good with the Inno Setup scripting language. Anything you guys can do to help me will do, anything, even small hints.
Please help, I am out of options right now.
I want to build a setup with Inno Setup that gets the hardware ID and stores it in a file on the CD so that the program can be installed on only one computer with only one license.
You want to create a unique Installer and CD for every client?
Wow, that's a lot of work. It only makes sense for a really small business.
Anyway, in regard to getting a hardware-id:
There is no function in InnoSetup to get a "hardware id".
You probably mean some kind of identifier, like a hard-disk or motherboard serial number, right? You could decide to fetch some serial numbers or identifiers by querying the WMI.
But wait: you compile the Inno Setup installer on the developer machine, right?
The only hardware-ids you could possibly get at that time are IDs from your own developer machine. How do you get the hardware-id of your client, which is later trying to install your software from CD?
The whole approach doesn't make much sense and is flawed.
In general, doing this kind of protection in the installer is fairly useless.
Please handle your protection in the application, not in the setup.
You might use one of the following approaches: "API-Key" or "license-code" or "license file" or "hardware-dongle".
In other words: it's always the same installer on multiple CDs, but the additional, separate license code makes the difference - not during, but after the installation. The user simply enters the key or loads the license file into the application and gets "Application registered to XY".
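If the application itself does need a machine identifier to tie a license file to, a minimal C# sketch of the WMI query mentioned above might look like the following. This is an illustration only: the Win32_BaseBoard serial number is just one of several identifiers people combine into a "hardware id", and it runs on the client machine inside the application, not at installer build time.

    using System;
    using System.Management;   // add a reference to System.Management.dll

    class HardwareId
    {
        static void Main()
        {
            // Read the motherboard serial number via WMI.
            using (var searcher = new ManagementObjectSearcher(
                "SELECT SerialNumber FROM Win32_BaseBoard"))
            {
                foreach (ManagementObject board in searcher.Get())
                {
                    Console.WriteLine("Baseboard serial: " + board["SerialNumber"]);
                }
            }
        }
    }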

Executing a third-party compiled program on a client's computer

I'd like to ask for your advice about improving the security of executing a compiled program on a client's computer. The idea is that we send a compiled program to a client, but the program has been written and compiled by a third party. How can we make sure that the program won't do any harm to the client's operating system while running? What would be the best way to achieve that goal without dramatically decreasing the program's performance?
UPDATE:
I assume that the third party doesn't want to harm the client's OS, but they may make a mistake, or their program may be infected by someone else.
The program could be compiled to either bytecode or native code; it depends on the third party.
There are two main options, depending on whether or not you trust the third party.
If you trust the third party, then you just care that the program actually came from them and that it hasn't changed in transit. Code signing is a good solution here. If the third party signs the code, and you check the signature, then you can verify that nothing was changed along the way and prove that it was them who wrote it.
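As a rough illustration of checking the signature on the receiving side, the C# sketch below extracts and chain-validates the Authenticode signer certificate of a file. One hedge: this shows who signed the file and whether their certificate chains to a trusted root, but full Authenticode verification (that the file's hash still matches the signature) is done by Windows itself, e.g. with "signtool verify /pa file.exe".

    using System;
    using System.Security.Cryptography.X509Certificates;

    class SignerCheck
    {
        static void Main(string[] args)
        {
            string path = args[0];   // the third-party executable to inspect

            // Pull out the Authenticode signer certificate embedded in the file.
            // Throws an exception if the file carries no signature at all.
            var signer = new X509Certificate2(X509Certificate.CreateFromSignedFile(path));

            // Build the certificate chain and check it up to a trusted root.
            var chain = new X509Chain();
            chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
            bool chainOk = chain.Build(signer);

            Console.WriteLine("Signed by : " + signer.Subject);
            Console.WriteLine("Chain     : " + (chainOk ? "trusted" : "NOT trusted"));
        }
    }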
If you don't trust the third party, then it is a difficult problem. The usual solution is to run code in a "sandbox", where it is allowed to perform a limited set of operations. This concept has been implemented for a number of languages - google "sandbox" and you'll find a lot about it. For Perl, see SafePerl, for Java see "Java Permissions". Variations exist for other languages too.
Depending on the language involved and what kind of permissions are required, you may be able to use the language's built-in sandboxing capabilities. For example, .NET has Code Access Security (CAS), and ASP.NET has "trust levels" built on top of it, which control how much access code has when it runs. Java has policy files that control the same thing.
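For the .NET case, a minimal sketch of that idea using the sandboxed AppDomain support in .NET 4 is shown below; the folder and executable names are placeholders, and this only constrains managed code - a native program would need OS-level sandboxing instead.

    using System;
    using System.Security;
    using System.Security.Permissions;

    class Sandbox
    {
        static void Main()
        {
            // Grant set for the untrusted code: bare execution permission and
            // nothing else (no file system, registry or network access).
            var grantSet = new PermissionSet(PermissionState.None);
            grantSet.AddPermission(
                new SecurityPermission(SecurityPermissionFlag.Execution));

            // The sandboxed domain is rooted in the untrusted application's folder.
            var setup = new AppDomainSetup { ApplicationBase = @"C:\untrusted" };
            AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup, grantSet);

            // Run the third-party executable inside the restricted domain.
            sandbox.ExecuteAssembly(@"C:\untrusted\ThirdParty.exe");
        }
    }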
Another method that may be helpful is to run the program under (Microsoft) Sysinternals Process Monitor, watching all the operations the program performs.
If it's developed by a third party, then it's very difficult to know exactly what it's going to do without reviewing the code. This may be more of a contractual solution - adding penalties into the contract with the third-party and agreeing on their liability for any damages.
Sign it. Google for 'digital signature' or 'code signing'.
If you have the resources, use a virtual machine. That is -- usually -- a pretty good sandbox for untrusted applications.
If this happens to be a Unix system, check out what you can do with chroot.
The other thing is: don't underestimate the value of thorough testing. You can run the app (in a non-production environment) and verify the following (escalating levels of paranoia!):
CPU/Disk usage is acceptable
doesn't talk to any networked hosts it shouldn't, i.e. no 'phone home' capability
Scan with your AV program of choice
You could even hook up pSpy or something to find out more about what it's doing.
Additionally, if possible, run the application as a low-privileged user. This will offer some degree of 'sandboxing', i.e. the app won't be able to interfere with other processes.
Also, don't overlook the value of legal contracts with the vendor, which may give you some kind of recompense if there is a problem. Of course, choosing a reputable vendor in the first place offers a level of assurance as well.
-ace

Upgrade Oracle database from 9.2.0.7 to 9.2.0.8

We are planning to upgrade from Oracle 9.2.0.7 to 9.2.0.8. The main reason for the proposed upgrade is to address the exception "terminated with error: ORA-00904: "T2"."SYS_DS_ALIAS_4": invalid identifier" that we get when we try to execute DBMS_STATS.GATHER_SCHEMA_STATS.
We are concerned that the proposed upgrade may have a negative impact on our Java application or, in the worst case, may not even be supported by our Java application.
What are the possible approaches or strategies we can take to ensure that the upgrade from Oracle 9.2.0.7 to 9.2.0.8 will not have an adverse impact on our Java application or cause it to function incorrectly? Essentially, we just want to confirm that our application will still work against Oracle 9.2.0.8.
Thank you.
Your first step should be to ensure you set up a test system with your exact production layout and current software (9.2.0.7).
Run it for a bit to make sure it's okay, then perform the upgrade on your test system and run it for a bit longer to ensure nothing has broken. I'm not talking about the cowboy-developer "if it runs for five minutes, that's okay" type of test. It should be a thorough test of all functionality, and of performance if possible.
Once you're happy with the level of testing, you can plan to do the same thing to production.
This isn't rocket science, you should always have a production mirror on which you can test upgrades of software, both your own and third-party stuff. And you should have backout strategies on the off-chance that the production upgrade fails anyway despite your testing.
We're pretty paranoid so we actually set up a whole new machine well in advance doing as much as possible. Then, at cutover time, we disable current production, perform whatever transfer is still required to the new machine, then bring that up and test it. If at any point during testing something cannot be fixed in the upgrade window, the old machine is put back online and we try again later, with appropriate kicks in the rear end for those responsible for the failure :-)
I've upvoted Paxdiablo's answer - there are few shortcuts around testing with as much application coverage as possible on a full-size copy of your production system.
I think you're generally looking to answer two questions with an upgrade:
1. Have new bugs been introduced in the Oracle functionality used by the application?
2. Have changes in the optimizer changed the execution plans (for the worse!) for any application queries?
I believe that as early as 9.2 the optimizer would include system statistics when determining execution plans, so you want to at least bring that information over to the test system, to remove that variable from the optimizer if your test-system hardware differs from production.
If you upgrade to Oracle 11g and have the $$$$, you can license and configure Real Application Testing. This will let you essentially record and play back database activity in a test instance to answer these two questions.
In addition to the excellent answers by dpbradley and paxdiablo: before the database is patched, it is worth checking the Oracle support site, support.oracle.com, for the known issues this patch set may introduce, which may stop you losing more than you gain!
You will need a valid support license to log into the Oracle support site but there is a note for 9.2.0.8 filed under:
9.2.0.8 Patch Set - Availability and Known Issues [ID 388281.1]

How do you troubleshoot "works on my machine" scenarios?

It happens a lot of the time that when you report a bug to a developer, he comes back saying "it works on my system", even though it's a browser app. How do you go about sorting that out?
From a training/process point of view:
Train your team to know that "works on my machine" is not a get-out-of-jail-free response.
Have an automated build server.
Have an automated test deployment.
Your developers must know that "works" is defined as "works on the test server", not just their machine.
From a testing/debugging point of view:
The developer needs to be shown the sequence of actions that result in the bug happening.
You might want to capture screenshots showing the bug, or possibly a video capture (using tools such as Camtasia). People can be quite bad at describing the sequence of actions they performed on a system that led to a bug showing itself, so the more information you can capture about the bug and how to replicate it, the better.
From a development/environment point of view:
If there is genuinely a bug that shows itself in one environment but not the developer's, then find out whether it is absent from all development environments or just that one developer's.
From thereon in it is a case of trying to reduce the differences between the two environments so that your developer can see the issue on his machine.
Or you can go the other way and attempt to debug the issue on the production (non-development) environment.
Implementation details of these vary by platform.
You need to give as much information to the developer as possible. Even stuff that you don't think is relevant.
I can't count the number of times I've had a problem reported and couldn't repeat it, only to find out later a piece of information that the user hadn't originally included but was the key to unlocking the puzzle.
You also need to not accept that answer, and say "well, something must be different between your setup and mine; what can we do to sort it out?"
We deal with that problem by having a development environment, on top of local development, that is as close to the production system as possible in terms of setup, hardware, etc. As a result, almost all problems that occur in the production environment are reproducible on that development system even if they can't be reproduced on local developer machines.
This is a common escapist retort that I encounter from teams. My response usually is: "You know, your system isn't the production server and that's where it needs to work". In other words, that excuse simply isn't acceptable.
I also indicate to them the possibilities:
a. There is a configuration difference between the local system and the server.
b. Certain dependencies of the functionality are not updated on the server.
c. They haven't cleared their browser cache.
d. I replicate the problem on the Staging server and demonstrate it to them.
e. ... and so on, depending on the case.
Try to recreate the system of the user who found the bug as closely as possible: from server config to machine config, including browser, OS and so on. You should probably have several different setups on which to test your app before releasing.
IE Tester is a good tool for this kind of troubleshooting. If you need to test lots of browsers, then virtual machines such as Virtual PC are your best bet, so you can keep many client setups on your test server.
ahh yes... the oldest excuse in the book.
Assuming that both the developer and the tester are testing against the same server, I would try to isolate the bug by identifying the differences between the developer's machine and the tester's machine. It could be something minor, like the Flash version, browser differences, or forgetting to clear the browser cache.
I would also recommend using an automated testing framework and test apps on a dedicated test server.
There's not much you can do as an end user, but as a developer you can avoid a lot of these issues by including plenty of logging in the system. The differences the user will think of will just be the simple things you have already tested, but good logging lets you see exactly what was happening when the system failed. I've found quite a few bugs "that couldn't possibly happen" that way.
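As a rough sketch of the sort of context worth logging at startup (here in C# with System.Diagnostics.Trace; the exact fields are only suggestions), these are the details that most often explain a "works on my machine" difference:

    using System;
    using System.Diagnostics;
    using System.Globalization;
    using System.Reflection;

    static class DiagnosticLog
    {
        // Write the environment details that most often differ between the
        // developer's machine and everyone else's. Route the output to a file
        // by adding a TraceListener in the application's config.
        public static void LogEnvironment()
        {
            Trace.TraceInformation("Machine     : {0}", Environment.MachineName);
            Trace.TraceInformation("OS          : {0}", Environment.OSVersion);
            Trace.TraceInformation("CLR         : {0}", Environment.Version);
            Trace.TraceInformation("Culture     : {0}", CultureInfo.CurrentCulture);
            Trace.TraceInformation("App version : {0}",
                Assembly.GetExecutingAssembly().GetName().Version);
            Trace.TraceInformation("User        : {0}\\{1}",
                Environment.UserDomainName, Environment.UserName);
        }
    }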
