Best system test technique for new front end to legacy system - oracle

Apologies in advance for asking a rather vague question ...
I need to test a new front end to a database. The problems are: 1) the DB schema is huge, with no documentation, and 2) there are many downstream systems, too many to build in a test environment.
I was wondering whether this approach might add value: 1) execute the same operations with a) the new and then b) the old front end, recording the start and finish times; then 2) use LogMiner to query the redo log (bounded by those start and end times) and compare the changes made to the DB during a) and b).
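Concretely, I imagine step 2 looking something like the sketch below (Python with python-oracledb; the schema filter, timestamp format, and connection details are placeholders, and it assumes the relevant redo/archive logs have already been registered with DBMS_LOGMNR.ADD_LOGFILE):
import oracledb

def changes_between(conn, start_ts, end_ts):
    # Mine the redo generated between the recorded start and end times.
    cur = conn.cursor()
    cur.execute("""
        BEGIN
          DBMS_LOGMNR.START_LOGMNR(
            STARTTIME => TO_DATE(:s, 'YYYY-MM-DD HH24:MI:SS'),
            ENDTIME   => TO_DATE(:e, 'YYYY-MM-DD HH24:MI:SS'),
            OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
        END;""", s=start_ts, e=end_ts)
    cur.execute("SELECT seg_owner, table_name, operation, sql_redo "
                "FROM v$logmnr_contents WHERE seg_owner = 'APP'")  # 'APP' is a placeholder schema
    rows = cur.fetchall()
    cur.execute("BEGIN DBMS_LOGMNR.END_LOGMNR; END;")
    return set(rows)

conn = oracledb.connect(user="miner", password="secret", dsn="legacydb")
old_changes = changes_between(conn, "2024-01-01 10:00:00", "2024-01-01 10:05:00")  # old front end run
new_changes = changes_between(conn, "2024-01-01 11:00:00", "2024-01-01 11:05:00")  # new front end run
print(new_changes ^ old_changes)  # symmetric difference: changes that don't match between runs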
Are there better approaches?
Matt

When testing, you need to define a successful test before you start. That is, given your starting environment and the actions you perform, you need to know what the ending environment should be. Example: say you have an accounting system and you want to "test" a payment transaction from account X to account Y. When you start, you know the balances of X and Y. You run your test and send a $100 payment from X to Y. After the test, is X's balance X-100 and Y's balance Y+100?
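In code form the idea is nothing more than this (get_balance and send_payment are hypothetical stand-ins for whatever your system actually exposes):
# Known starting state + action under test -> assert the expected ending state.
start_x = get_balance("X")
start_y = get_balance("Y")

send_payment(source="X", target="Y", amount=100)  # the action under test

assert get_balance("X") == start_x - 100
assert get_balance("Y") == start_y + 100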
In your case, I would:
1) Take a backup of the database, i.e. start from a known, consistent state.
2) Run old processes.
3) Run reports on results.
4) Restore database from #1
5) Run new processes
6) Run reports
7) Compare the reports from steps #3 and #6 (sketched below).
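A sketch of that harness, with each step as a hypothetical helper (take_backup/restore_backup would wrap whatever backup tool you use, run_processes drives the front end, run_reports dumps the tables you care about):
backup = take_backup()            # 1) start from a known, consistent state
run_processes(frontend="old")     # 2)
old_report = run_reports()        # 3)
restore_backup(backup)            # 4)
run_processes(frontend="new")     # 5)
new_report = run_reports()        # 6)
assert old_report == new_report   # 7) same inputs should leave the same state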


Access denied calling EnumJobs

I'm trying to get the domain username of jobs in a printer queue on Windows Server 2012 R2 Standard. The code snippet below is in Delphi. OpenPrinter and EnumJobs are part of the Windows Spooler API.
Update! Setting maxJobs to a higher multiple of 4 allows more jobs in the queue to be enumerated, e.g. maxJobs=8 allows two jobs, but not three; maxJobs=12 allows three jobs.
Solved! It looks like I can just ignore the return value of EnumJobs and simply check whether the number of jobs it returns (the last argument in the call) is greater than 0. This seems to work fine in all the situations listed below, including printing via a share.
const
  maxJobs = 4;
var
  h : THandle;
  jia : array [1..maxJobs] of JOB_INFO_1;
  jiz, jic : DWord; // bytes needed, count of jobs returned
begin
  if OpenPrinter('DocTest', h, nil) then
  begin
    // Level 1: fill jia with JOB_INFO_1 records.
    if EnumJobs(h, 0, maxJobs, 1, @jia, SizeOf(jia), jiz, jic) then
[...]
EnumJobs returns true or false depending on different conditions listed below. If it returns false in any of the following situations, the error message I'm retrieving is "System Error. Code: 5. Access is denied".
Clearly a permissions problem. I have assigned Print, Manage this Printer, and Manage documents to Everyone in the printer security settings. All jobs have been submitted after those settings have been assigned. My program is running in a session logged in as the domain administrator.
EnumJobs returns TRUE if I print a job from the same session I'm running this program in, and there's only one job in the queue. (See Update above for change)
EnumJobs returns TRUE if I print from another session on the server (it has terminal services installed) as any user, and there's only one job in the queue. (See Update above for change)
EnumJobs returns FALSE if there is more than one job in the queue. It doesn't matter if the jobs are for the same user or not. (See Update above for change)
EnumJobs returns FALSE if I print a job from another server to the printer share. Both servers are in the same domain. It doesn't matter which user prints the job, including the domain administrator.
What's going on here, in particular getting an access denied when enumerating more than (maxJobs / 4) job(s) at a time?
Ignore the return value of EnumJobs and inspect the out argument pcReturned to see whether it's greater than 0; it holds the number of print jobs found.
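As a cross-check, the same enumeration through pywin32 (an assumption on my part that Python is available on the box; win32print returns the job list directly, so there is no boolean result to misread):
import win32print

handle = win32print.OpenPrinter('DocTest')
try:
    jobs = win32print.EnumJobs(handle, 0, -1, 1)  # FirstJob=0, NoJobs=-1 (all), Level=1
    for job in jobs:
        print(job['pUserName'], job['pDocument'])  # domain username and document name
finally:
    win32print.ClosePrinter(handle)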

Concurrency in Redis for flash sale in distributed system

I am going to build a flash-sale system that shares a single Redis instance and runs on 15 servers at a time.
The flash-sale algorithm is:
Set the max inventory for any product id in Redis using redisTemplate.opsForValue().set(key, 400L);
For every request:
get the current inventory using Long val = redisTemplate.opsForValue().get(key);
check whether it is non-zero:
if (val == null || val == 0) {
    System.out.println("not taking order....");
} else {
    // put the order in Kafka
    // and decrement using redisTemplate.opsForValue().decrement(key)
}
But the problem here is concurrency:
if I set the inventory to 400 and test it with 500 request threads, the inventory goes negative.
If I make the method synchronized, I cannot manage that across distributed servers.
So what would be the best approach?
Note: I cannot go to an RDBMS and set an isolation level, because of the high request count.
Redis is single-threaded, so running a Lua script on it is always atomic.
You can therefore define a Lua script on your Redis instance and run it from your Spring instances.
Your Lua script would just be the sequence of operations to execute against your Redis instance (the only one that has the correct value of your stock), returning the new value, or an error if the value would go negative.
Your Lua script is basically a Redis transaction. There are other ways to achieve a Redis transaction (MULTI/EXEC with WATCH, for example), but IMHO Lua is the simplest of them all (maybe the least performant, but I have found that in most cases it is fast enough).
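For instance, here is a minimal sketch in Python with redis-py (the key name is illustrative; from Spring you can load the same script into a DefaultRedisScript and run it with redisTemplate.execute):
import redis

r = redis.Redis()

# The script runs atomically on Redis's single thread:
# the availability check and the decrement happen as one step.
TAKE_ONE = """
local stock = tonumber(redis.call('GET', KEYS[1]) or '0')
if stock <= 0 then
  return -1
end
return redis.call('DECR', KEYS[1])
"""
take_one = r.register_script(TAKE_ONE)

remaining = take_one(keys=['inventory:product-42'])
if remaining < 0:
    print('not taking order....')
else:
    print('order accepted,', remaining, 'left')  # safe to put the order in Kafka now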

Managing shared resources with threads in Huey

I have to update many rows (incrementing one value in each row) in a peewee database (SqliteDatabase). Some objects may not exist yet, so I have to create them with default values before working with them. I would have used the approach from the peewee docs (Atomic updates), but I couldn't figure out how to mix Model.get_or_create() with an in [my_array] filter.
So I decided to run the queries inside a transaction, committing once at the end (I hope that's what it does).
The reason I'm writing to Stack Overflow is that I don't know how to make db.atomic() work with threading (I tested with 4 workers) in Huey, because .atomic() locks the connection (peewee.OperationalError: database is locked). I've tried @huey.lock_task, but from what I've found it isn't a solution to my problem.
Code of my class:
class Article(Model):
    name = CharField()
    mention_number = IntegerField(default=0)

    class Meta:
        database = db
Code of my task:
@huey.task(priority=30)
def update(names):  # "names" is a list of strings
    with db.atomic():
        for name in names:
            article, created = Article.get_or_create(name=name)
            article.mention_number += 1
            article.save()
Well, if you're using a recent version of Sqlite (3.24 or newer) you can use Postgres-style upsert queries. This is well supported by Peewee: http://docs.peewee-orm.com/en/latest/peewee/api.html#Insert.on_conflict
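A rough sketch of what that could look like for your model (it assumes you add unique=True to Article.name, since the upsert needs a unique index as its conflict target):
@huey.task(priority=30)
def update(names):  # "names" is a list of strings
    rows = [{'name': n, 'mention_number': 1} for n in names]
    (Article
     .insert_many(rows)
     .on_conflict(
         conflict_target=[Article.name],
         update={Article.mention_number: Article.mention_number + 1})
     .execute())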
To answer the other question about shared resources, it's not clear from your example what you would like to happen... Sqlite only allows one write transaction at a time. So if you are running several threads, only one of them may be writing at any given time.
Peewee stores database connections in a thread local, so Peewee databases can be safely used in multithreaded applications.
You didn't mention why huey lock_task wouldn't work.
Another suggestion is to try using WAL-mode with Sqlite, as WAL-mode allows multiple reader transactions to co-exist with a single writer.
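Enabling WAL is a one-liner when you construct the database (the filename is illustrative):
from peewee import SqliteDatabase

db = SqliteDatabase('app.db', pragmas={'journal_mode': 'wal'})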

Sequel + ADO + Puma is not threading queries

We have a website running on Windows Server 2008 + SQL Server 2008 + Ruby + Sinatra + Sequel + Puma.
We've developed an API for the website.
When the access points are requested by many clients at the same time, the clients start getting RequestTimeout exceptions.
I investigated a bit and noticed that Puma handles multithreading fine,
but Sequel (or some layer below Sequel) is processing one query at a time, even when the queries come from different clients.
In fact, the RequestTimeout exceptions don't occur if I launch several web servers, each listening on a different port, and assign a different port to each client.
I don't know yet whether the problem is in Sequel, ADO, ODBC, Windows, SQL Server, or something else.
The truth is that I cannot switch to any other technology (like TinyTDS).
Below is a small piece of code that you can use to replicate the bug:
require 'sinatra'
require 'sequel'

CONNECTION_STRING =
  "Driver={SQL Server};Server=.\\SQLEXPRESS;" +
  "Trusted_Connection=no;" +
  "Database=pulqui;Uid=;Pwd=;"

DB = Sequel.ado(:conn_string=>CONNECTION_STRING)

enable :sessions
configure { set :server, :puma }
set :public_folder, './public/'
set :bind, '0.0.0.0'

get '/delaybyquery.json' do
  tid = params[:tid].to_s
  begin
    puts "(track-id=#{tid}).starting access point"
    q = "select p1.* from liprofile p1, liprofile p2, liprofile p3, liprofile p4, liprofile p5"
    DB[q].each { |row| # this query takes a long time
      puts row[:id]
    }
    puts "(track-id=#{tid}).done!"
  rescue => e
    puts "(track-id=#{tid}).error:#{e.to_s}"
  end
end

get '/delaybycode.json' do
  tid = params[:tid].to_s
  begin
    puts "(track-id=#{tid}).starting access point"
    sleep(30)
    puts "(track-id=#{tid}).done!"
  rescue => e
    puts "(track-id=#{tid}).error:#{e.to_s}"
  end
end
There are two access points in the code above:
delaybyquery.json, which generates a delay by joining the same table five
times. Note that the table needs about 1,000 rows for the query to be
really slow; and
delaybycode.json, which generates a delay by simply calling Ruby's sleep
function.
Both access points receive a tid (tracking-id) parameter, and both write their
output to the console, so you can follow the activity of both processes in the
same window and check which access point blocks incoming requests from other
browsers.
For testing, I open two tabs in the same Chrome browser.
Below are the two tests I perform.
Step #1: Run the webserver
c:\source\pulqui>ruby example.app.rb -p 81
Step #2: Testing Delay by Code
I called this URL:
127.0.0.1:81/delaybycode.json?tid=123
and 5 seconds later I called this other URL
127.0.0.1:81/delaybycode.json?tid=456
In the output you can see that both calls run in parallel.
Step #3: Testing Delay by Query
I called this URL:
127.0.0.1:81/delaybyquery.json?tid=123
and 5 seconds later I called this other URL
127.0.0.1:81/delaybyquery.json?tid=456
In the output you can see that the calls run one at a time.
Each call to an access point finishes with a query timeout exception.
This is almost assuredly due to win32ole (the driver that Sequel's ado adapter uses). It probably doesn't release the GVL during queries, which would cause the issue you are seeing.
If you cannot switch to TinyTDS or to JRuby, then your only option for concurrent queries is to run separate webserver processes and have a reverse proxy dispatch requests to them.

Guaranteeing the two items are saved or that none are -- across servers

Let's say that a process P running on some machine has some data X that must be saved in MySql_X (on a different machine) and some data Y that must be saved in MySql_Y (on yet another machine). How can I guarantee that either both X and Y are saved, or that neither is? What's the algorithm?
For instance (in silly pseudo code):
X_transaction = MySql_X.startTransaction(save(X)) // returns null on failure to start the transaction
Y_transaction = MySql_Y.startTransaction(save(Y))
if X_transaction && Y_transaction {
    X_transaction.commit()
    // what if failure happened here?
    Y_transaction.commit()
}
This obviously doesn't work, because a failure could happen in P or in MySql_Y at the line noted above, and then only X would be saved.
