Single user multi-login prevention upon browser close - Oracle

I have a table with a field 'loginStatus'. Every time a user logs in, the value is set to 1, and after clicking logout the value is set to 0. When a user tries to log in, the value of that field is checked: if it is 0, the user can log in; if it is 1, the user cannot. Now if by any means the browser is closed, the user cannot log in with that userID, because the value of that field is still 1 (he hasn't clicked the logout button, so it was never changed). My application runs fine unless the user closes the browser.
I know this issue can be solved differently, but I have been asked to do it this way. The problem is that I am not much of a pro in Java EE, so help with explanations is exactly what I am looking for.
I also have a possible solution: creating a database trigger that sets the loginStatus value back to 0, triggered, say, 15 minutes after the user logs in. But I also don't know how to create that kind of trigger, one that fires after a specific amount of time.

If you've had this requirement forced on you, you can automatically expire accounts without having to run any job.
Instead of a simple "on/off" flag, have a date/timestamp on the table which is set to the current date/time. Every now and then when the user hits the server with a request, you'd update this column to the current time.
If a second session tries to log in, it should check the date/timestamp on the table; if it is more than 15 minutes old, the login is allowed, otherwise it is blocked.
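A minimal sketch of that check, assuming a USERS table with a LAST_ACTIVITY column (both names are illustrative):

  -- on every authenticated request, record the activity:
  UPDATE users
     SET last_activity = systimestamp
   WHERE user_id = :id;

  -- at login, block only if the existing session is still fresh:
  SELECT CASE
           WHEN last_activity IS NULL
             OR last_activity < systimestamp - INTERVAL '15' MINUTE
           THEN 'ALLOW'
           ELSE 'BLOCK'
         END AS login_decision
    FROM users
   WHERE user_id = :id;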

You could create a database job that runs periodically and expires old sessions. Depending on the version of Oracle you're using, you can use either the DBMS_JOB package or the more sophisticated DBMS_SCHEDULER package. DBMS_JOB is an older package, but for relatively simple, isolated tasks like this it has less of a learning curve. For example, if you have a stored procedure UNLOCK_ACCOUNTS that, when executed, determines which accounts to unlock and unlocks them, you can use DBMS_JOB to have that procedure run every 15 minutes:
  DECLARE
    l_jobno INTEGER;
  BEGIN
    dbms_job.submit( l_jobno,
                     'BEGIN unlock_accounts; END;',
                     sysdate + interval '15' minute,
                     'sysdate + interval ''15'' minute' );
    commit;
  END;
Of course, you could also use a Java scheduler (Quartz is a popular one) or the DBMS_SCHEDULER package to do the same thing. This does require, however, that there is a field somewhere that stores the login timestamp so that the UNLOCK_ACCOUNTS procedure can figure out which logins happened more than 15 minutes ago.
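A sketch of what UNLOCK_ACCOUNTS might look like, assuming the login table carries a LOGIN_TIME timestamp alongside the LOGINSTATUS flag (table and column names are illustrative):

  -- reset the flag for any login more than 15 minutes old
  CREATE OR REPLACE PROCEDURE unlock_accounts
  AS
  BEGIN
    UPDATE users
       SET loginstatus = 0
     WHERE loginstatus = 1
       AND login_time < sysdate - interval '15' minute;
    COMMIT;
  END unlock_accounts;
  /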
Generally, however, this entire architecture is rather suspect. It's pretty odd that you'd want a web-based application (which is inherently stateless) to deny logins because the user had opened another browser at some earlier point in time. It's relatively common to time out sessions that have been inactive for a while as a security measure, but 15 minutes is generally far too short for that sort of thing; even bank web sites generally allow you to be idle longer than that. And this approach doesn't even appear to prevent you from being logged in from multiple browsers/computers at the same time, so long as the logins happen to come more than 15 minutes apart.

Related

Create Oracle DB user with expiration time

I would like to create an Oracle DB user and disable him exactly 8 hours later. I don't care whether the user just gets locked or all of his roles are revoked; I just want to prevent him from doing any activity on the DB exactly 8 hours after his DB user was created. Does Oracle provide such an option out of the box?
If not, I might go with the following solution:
create a table where all newly created DB users are stored (with DB user creation time)
create a trigger on CREATE USER, so I save the DB username and his creation time in my table
create a function / job that checks my table every 5 minutes and locks any user older than 8 hours
My proposed solution is very nasty, so I really hope there's a better solution for my issue.
How about creating a profile? A profile is a set of limits on database resources; if you assign the profile to a user, that user cannot exceed these limits.
Especially check the following parameters:
CONNECT_TIME: Specify the total elapsed time limit for a session, expressed in minutes
PASSWORD_LIFE_TIME: Specify the number of days the same password can be used for authentication. If you also set a value for PASSWORD_GRACE_TIME, then the password expires if it is not changed within the grace period, and further connections are rejected. If you omit this clause, then the default is 180 days
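A sketch of how that could be put together; the profile and user names are illustrative, the exact limits accepted vary by version, and note that CONNECT_TIME is only enforced when the RESOURCE_LIMIT initialization parameter is TRUE:

  -- enable enforcement of resource limits such as CONNECT_TIME
  ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

  CREATE PROFILE eight_hour_profile LIMIT
    CONNECT_TIME       480    -- total elapsed session time, in minutes
    PASSWORD_LIFE_TIME 8/24;  -- password valid for 8 hours (fraction of a day)

  CREATE USER temp_user IDENTIFIED BY some_password PROFILE eight_hour_profile;
  GRANT CREATE SESSION TO temp_user;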

Update database records based on date column

I'm working on an app where some entities in the database have a column representing the date until which that particular entity is available for certain actions. When it expires, I need to change its state, meaning update a column representing its state.
What I'm doing so far: whenever I ask the database for those entities to do something with them, I first check whether they have expired, and if they have, I update them. I don't particularly like this approach, since it means I will have a bunch of records in the database in the wrong state just because I haven't queried them yet. Another approach would be a periodic task that runs over those records and updates them as necessary. I don't like that either, since again I would have records in an inconsistent state, and in that light the first approach seems more reasonable.
Is there another way of doing this; am I missing something? I should mention that I use Spring Boot + Hibernate for my application, and the underlying DB is PostgreSQL. Is there any technology-specific trick I can use to obtain what I want?
There is no "expired" trigger type in databases. If you have data that expires and something should happen with it, there are two solutions (you have written about them): do some extra work with the expired rows before you use the data, or run a cron/task (this can live at the DB level or on the server side).
I recommend the cron approach. Here is the explanation:
Do something with expired rows before you get the data (update before select):
+: you update expired data right before you need it. A question arises here, though: do you update only what you requested, or everything that has expired? Updating everything might be time-consuming if, of all the records, you need just 2 and end up updating 2000 expired records unrelated to your working dataset.
-: it takes a long time to update all the records; if the database is shared, so access to the DB is not only through your application, the expiry logic is never executed for those other clients; you have to control every entry point and know where you must handle expired rows and where you needn't; if expiry is measured in minutes or seconds, new records may expire the very second after you have run the logic; and if the workflow for handling expired data changes, you want that logic kept in one place (the cron), whereas with update-before-select you must also update the changed logic at every call site.
CRON/TASK
-: you have to spend time configuring it, but just once, 30-60 minutes max :)
+: it executes in the background; if your DB is used by more than your application, the expired-data logic is still applied; you don't have to check for stale data in your Java code before every select (nor remember to, nor explain it to every new employee...); and you get a clean split between the logic that cares about stale data and the normal queries to the DB.
You can execute SELECT ... FOR UPDATE in the cron job, and even if a server-side query arrives during the update, it will simply wait until the stale-data logic completes and then return up-to-date data.
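For example, the statement such a job runs can be as simple as the following sketch; the table entity and the columns state and available_until are illustrative:

  -- run periodically by the scheduled task
  UPDATE entity
     SET state = 'EXPIRED'
   WHERE state <> 'EXPIRED'
     AND available_until < current_timestamp;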
For Spring: see the Spring scheduling documentation; spring-quartz-schedule is a simple example.
For the DB level: see the PostgreSQL job schedulers.
A scheduler/cron is the best practice for such things.

can't find what I want in logminer, in fact, can't find anything recent

I am the sysadmin for a school, so I'm an IT generalist; jack of all trades, master of none, right? Our student information system runs on top of Oracle 11g. I would like to know how to use LogMiner to find out, at the very least, when something was changed in the database that shouldn't have been changed.
I have configured a test server to play with, so rest your mind, our production system isn't at risk while I play here.
The server is Windows. I go to a command prompt and type sqlplus / as sysdba.
I execute dbms_logmnr.add_logfile blah, blah multiple times to add the log files.
Then alter session set NLS_DATE_FORMAT = 'mm-dd-yyyy HH24:mi:ss'; so the time stamps tell me more than just the date.
Then I go to the application on my test server and make a change to a student demographic record. I want to find this change using logminer.
I do a select timestamp,sql_undo from V$LOGMNR_CONTENTS WHERE TIMESTAMP > TO_DATE('04-11-2013 11:59:00'); (I made the change just now, around 3 pm)
I get no rows.
If I do the same thing, but with a time just after midnight, I get thousands of rows, as the app has routines that kick off at midnight doing maintenance, like recalculating students' class ranks, for instance.
So why am I not finding the change I made logged? I believe I'm looking in the right log files, or I wouldn't see the activity at midnight.
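For reference, the steps described above amount to something like this minimal sketch; the log file path is illustrative, and DBMS_LOGMNR.START_LOGMNR must be called before V$LOGMNR_CONTENTS can be queried:

  BEGIN
    dbms_logmnr.add_logfile( logfilename => 'C:\app\oradata\ORCL\REDO01.LOG',
                             options     => dbms_logmnr.new );
    -- repeat with options => dbms_logmnr.addfile for each further log
    dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog );
  END;
  /
  alter session set NLS_DATE_FORMAT = 'mm-dd-yyyy HH24:mi:ss';
  select timestamp, sql_undo
    from v$logmnr_contents
   where timestamp > to_date('04-11-2013 11:59:00');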
Though your latest entry is recorded, it won't appear in V$LOGMNR_CONTENTS until a sufficient number of updates has been recorded. For example, if you do 100 updates, you may only see 80; to flush out the remaining 20, you need some more updates so that they become visible. We had a similar problem where LogMiner was not showing the latest updates, especially when there were very few of them. We had to create a dummy table and generate some updates against it regularly, so that LogMiner always actively showed the updates and nothing stayed in the buffer. In our use case creating the dummy table was OK; I am not sure whether it is in yours.
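A sketch of that dummy-activity workaround; the table name is illustrative:

  create table logmnr_heartbeat ( id number primary key, ts date );
  insert into logmnr_heartbeat values ( 1, sysdate );
  commit;

  -- run this periodically (e.g. from a job) to keep redo flowing
  -- so that recent changes surface in V$LOGMNR_CONTENTS:
  update logmnr_heartbeat set ts = sysdate where id = 1;
  commit;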

DBMS_JOB usage: 'Singular' job keeps recurring every five seconds

I learned (the hard way) that DDL statements cannot be executed in the non-transactional context of a logon trigger, and that the solution is a job. I want the job to be executed immediately and exactly once, and therefore set the next_date parameter to sysdate and the interval parameter to NULL.
Here is what I execute in the trigger:
  dbms_job.submit(
    job       => jobNumber,
    what      => 'someProcedure(someParameter);',
    next_date => sysdate,
    interval  => null
  );
This works quite well, but once the trigger has fired (and the above command has been submitted) for the first time, the audit log shows that the job keeps reappearing exactly every five seconds, under the same user account it was first submitted under. The associated program is always something like ORACLE.EXE (J001), although the job was of course submitted from a user session initiated by a client application.
Can anyone explain this to me please? I hoped to get a singular execution of the job and not an eternal recurrence. Thanks in advance!

Is there a way to peek inside of another Oracle session?

I have a query editor (Toad) looking at the database.
At the same time, I am also debugging an application with its own separate connection.
My application starts a transaction, does some updates, and then makes decisions based on some SELECT statements. Because the update statements (which are many and complex) are not committed yet, the results my application gets from its SELECT are not the same as what I get if I run the same statement in Toad.
Currently I get around this by dumping the query output from the app into a text file, and reading that.
Is there a better way to peek inside another Oracle session, and see what that session sees, before the commit is complete?
Another way to ask this is: Under Oracle, can I enable dirty reads between only two sessions, without affecting anyone else's session?
No, Oracle does not permit dirty reads. Also, since the changes may not have physically been written to disk, you won't find them in the data files.
The log writer will write any pending data changes at least every three seconds, so you may be able to use the Log Miner stuff to pick it out from there.
But in general, your best bet is to include your own debugging information which you can easily switch on and off as required.
It's not a full answer, I know, but while there are no dirty reads, there are locks that can give you some idea of what is going on.
In session 1 if you insert a row with primary key 7, then you will not see it when you select from session 2. (That would be a dirty read).
However, if you attempt an insert from session 2 using the primary key of 7 then it will block behind session 1 as it has to wait and see if session 1 will commit or rollback. You can use "WAIT 10" to wait 10 seconds for this to happen.
A similar story exists for updates or anything that would cause a unique constraint violation.
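A sketch of that probing technique; the ORDERS table and its primary key are illustrative:

  -- session 1 (the application being debugged), not yet committed:
  --   INSERT INTO orders (order_id) VALUES (7);

  -- session 2 (Toad) cannot see the row; Oracle has no dirty reads:
  select * from orders where order_id = 7;    -- 0 rows

  -- but session 2 can detect the uncommitted insert: probing the same
  -- primary key blocks until session 1 commits or rolls back
  insert into orders (order_id) values (7);   -- blocks

  -- for rows that already exist but are locked, SELECT ... FOR UPDATE
  -- offers a bounded wait instead of blocking indefinitely:
  select * from orders where order_id = 7 for update wait 10;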
Can you not just temporarily set the isolation level in the session you want to peek at to 'read uncommitted', with an ALTER SESSION command or a logon trigger (I have not tried this myself)?
What I prefer to do (in general) is place debug statements in the code that remain there permanently, but are turned off in production - Tom Kyte's debug.f package is a useful place to start - http://asktom.oracle.com/tkyte/debugf
