I would like to create an Oracle DB user and disable it exactly 8 hours later.
I don't care whether the user just gets locked or all of its roles are revoked; I just want
to prevent it from doing any activity on the DB exactly 8 hours after the DB user was created.
Does Oracle provide such an option out of the box?
If not, I might go with the following solution:
create a table where all newly created DB users are stored (with DB user creation time)
create a trigger on CREATE USER, so I save the DB username and its creation time in my table
create a function/job that checks my table every 5 minutes for any user older than 8 hours and locks it
My proposed solution is very nasty (a rough sketch is below), so I really hope there's a better solution for my issue.
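For reference, here is a rough sketch of what I have in mind, using a DDL trigger plus a DBMS_SCHEDULER job (the table, trigger and job names are just placeholders I made up):

-- Placeholder names: new_db_users, trg_log_new_user, LOCK_EXPIRED_USERS
CREATE TABLE new_db_users (
  username   VARCHAR2(128) PRIMARY KEY,
  created_at DATE DEFAULT SYSDATE NOT NULL
);

-- Record every newly created DB user together with its creation time
CREATE OR REPLACE TRIGGER trg_log_new_user
  AFTER CREATE ON DATABASE
BEGIN
  IF ora_dict_obj_type = 'USER' THEN
    INSERT INTO new_db_users (username) VALUES (ora_dict_obj_name);
  END IF;
END;
/

-- Every 5 minutes, lock any user created more than 8 hours ago
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'LOCK_EXPIRED_USERS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
        FOR u IN (SELECT username
                    FROM new_db_users
                   WHERE created_at < SYSDATE - 8/24) LOOP
          EXECUTE IMMEDIATE 'ALTER USER "' || u.username || '" ACCOUNT LOCK';
        END LOOP;
      END;]',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
    enabled         => TRUE);
END;
/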
How about creating a profile,
which is a set of limits on database resources? If you assign the profile to a user, then that user cannot exceed these limits.
Especially check the following parameters:
CONNECT_TIME: Specify the total elapsed time limit for a session, expressed in minutes
PASSWORD_LIFE_TIME: Specify the number of days the same password can be used for authentication. If you also set a value for PASSWORD_GRACE_TIME, then the password expires if it is not changed within the grace period, and further connections are rejected. If you omit this clause, then the default is 180 days
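As a rough sketch (the profile, user, and password names below are made up, and this assumes profile resource limits are enabled via the RESOURCE_LIMIT parameter), an 8-hour window could be expressed like this:

-- Illustrative names; the 8-hour window expressed in minutes and in fractions of a day
CREATE PROFILE eight_hour_profile LIMIT
  CONNECT_TIME       480     -- maximum elapsed session time, in minutes
  PASSWORD_LIFE_TIME 8/24;   -- password expires 8 hours after it is set

CREATE USER temp_report_user IDENTIFIED BY ChangeMe_123
  PROFILE eight_hour_profile;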
I'm using Laravel 5.7.
I need to fetch the exchange rates, and I want to create a scheduler that runs every 5 minutes.
Now, do you think it's better to put the currency information in a config file or save it to a database? And of course, I think caching could be an option too, right?
In your situation the data changes only every 5 minutes. So when you fetch the rates, store them in the database, cache the new values, and serve the rates to users from the cache. When the next 5 minutes have passed and you fetch the newest rates, clear the existing cache, store the new values in the database, then cache them again and once more serve the rates to users from the newly cached content.
If your database grows day by day, you must add extra logic so you are not pushing millions of records into the cache every 5 minutes. I would advise against storing highly dynamic content in the cache. If you do want to cache, you can use Redis, Memcached, etc., but keep in mind that you have to clear the cache and store the new content each time.
If you only fetch daily rates every 5 minutes and don't store anything in the database, you can use the logic described above without the database step: save the records directly to the cache and clear it whenever new rates are fetched.
Also, add an index to the database table so the data can be retrieved faster.
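For example, assuming a table called exchange_rates with currency_code and fetched_at columns (hypothetical names, adjust to your schema), the index could look like:

-- Hypothetical table and column names
CREATE INDEX idx_rates_currency_fetched
    ON exchange_rates (currency_code, fetched_at);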
I'm working on an app where I have some entities in the database that have a column representing the date until which that particular entity is available for some actions. When it expires I need to change its state, meaning updating a column representing its state.
What I'm doing so far: whenever I ask the database for those entities to do something with them, I first check if they are expired and, if they are, I update them. I don't particularly like this approach, since it means I will have a bunch of records in the database that are in the wrong state just because I haven't queried them. Another approach would be to have a periodic task that runs over those records and updates them as necessary. I don't like that either since, again, I would have records in an inconsistent state, and in this case the first approach seems more reasonable.
Is there another way of doing this, am I missing something? I should mention that I use spring-boot + hibernate for my application and the underlying DB is PostgreSQL. Is there any technology-specific trick I can use to obtain what I want?
There is no "on expiry" trigger type in databases. If you have something that expires and you need to do something with it, there are two solutions (you have already described them): do some extra work on the expired rows before you use the data, or run a cron/task (either at the DB level or on the server side).
I recommend the cron approach. Here is the reasoning:
Do some extra work on the expired rows before you get the data
(update before select)
+: you update expired data right before you need it. But this raises a question - do you update only the rows you requested, or all expired rows? Updating everything may be time-consuming if you only need 2 records but end up updating 2000 expired records that have nothing to do with your working dataset.
-: updating all records takes a long time; if the database is shared and accessed by more than just your application, the expiry logic is not executed for those other clients (if that is your case); you have to keep track of every entry point and decide where the expiry handling should run and where it shouldn't; if expiry is measured in minutes or seconds, new records may expire again right after you have handled the previous batch; and if the expiry-handling workflow ever changes, you want that logic in one place (the cron job) - with update-before-select you have to change it everywhere the update is done.
CRON/TASK
-: you have to spend some time configuring it, but only once (30-60 minutes at most :) );
+: it runs in the background; if your DB is used by more than just your application, the expiry logic is still applied for everyone; you don't have to check for stale data in your Java code before every select (and remember to do it, and explain it to every new employee...); and the logic is cleanly split between the job that takes care of stale data and the normal queries to the DB.
The cron job can use SELECT ... FOR UPDATE, so even if your application issues a select while the job is running, it simply waits until the stale-data logic completes and then reads up-to-date data.
For Spring:
see the Spring scheduling documentation; a simple example is spring-quartz-schedule.
For the DB level: a PostgreSQL job scheduler.
A scheduler/cron job is the best practice for this kind of thing (a DB-level sketch follows).
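As an illustration of the DB-level route, a sketch using the pg_cron extension (a reasonably recent version that supports named jobs) might look like the following; the entity table and its expires_at/state columns are assumed names, not taken from your schema:

-- Assumed table/column names (entity, expires_at, state); adjust to your schema
SELECT cron.schedule(
  'expire-entities',   -- job name
  '*/5 * * * *',       -- every 5 minutes
  $$UPDATE entity
       SET state = 'EXPIRED'
     WHERE expires_at < now()
       AND state <> 'EXPIRED'$$
);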
We have a large user base in our Oracle Identity Manager system, with over 0.5 million records in the USR table. Our trusted reconciliation scheduled jobs run every 2 hours. While running the trusted reconciliation scheduled jobs for LDAP and FlatFile, OIM fires a search query on the USR table every time to list all active users. Due to the large user base, this query takes a lot of time, and our scheduled job, which is supposed to bring in fewer than 100 inserts/updates, takes around 1 hour to complete. Is there a way to optimize it? I have gone through the OIM optimization guide and have applied all the optimizations suggested by Oracle, which includes putting the USR table in the default buffer pool. Any suggestions would be appreciated.
Thanks.
I have a table with a field 'loginStatus'. Every time a user logs in, the value is set to 1, and after clicking on logout the value is set to 0. When a user tries to log in, the value of that field is checked: if it is 0 the user can log in, if it is 1 the user cannot. Now if by any means the browser is closed, the user cannot log in with that userID, because the value of that field is still 1 (he hasn't clicked the logout button, so it was never changed). My application runs fine unless the user closes the browser.
I know this issue can be solved differently, but I have been asked to do it this way. The problem is that I am not much of a pro in Java EE, so multiple suggestions with explanations are exactly what I am looking for.
I also have a possible solution: creating a database trigger that changes the loginStatus value back to 0, fired, say, 15 minutes after the user logs in. However, I don't know how to create that kind of trigger that fires after a specific amount of time.
If you've had this requirement forced on you, you can automatically expire accounts without having to run any job.
Instead of a simple "on/off" flag, have a date/timestamp on the table which is set to the current date/time. Every now and then when the user hits the server with a request, you'd update this column to the current time.
If a second session tries to login, that session should check the date/timestamp on the table, and if it's more than 15 minutes ago, the login is allowed; otherwise it is blocked.
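A minimal sketch of that check, assuming a table called app_users with a last_activity column (both names are made up):

-- On every authenticated request, refresh the activity timestamp
UPDATE app_users
   SET last_activity = SYSTIMESTAMP
 WHERE user_id = :id;

-- On a new login attempt, allow it only if the previous session
-- has been idle for more than 15 minutes
SELECT CASE
         WHEN last_activity IS NULL
           OR last_activity < SYSTIMESTAMP - INTERVAL '15' MINUTE
         THEN 'ALLOW'
         ELSE 'BLOCK'
       END AS login_decision
  FROM app_users
 WHERE user_id = :id;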
You could create a database job that runs periodically and expires old sessions. Depending on the version of Oracle you're using, you can either use the DBMS_JOB package or the more sophisticated DBMS_SCHEDULER package. DBMS_JOB is an older package but for relatively simple and isolated tasks like this, there is less of a learning curve. For example, if you have a stored procedure UNLOCK_ACCOUNTS that, when executed, determines which accounts to unlock and unlocks them, you can use DBMS_JOB to have that procedure run every 15 minutes:
DECLARE
  l_jobno INTEGER;
BEGIN
  dbms_job.submit( l_jobno,                               -- OUT: job number assigned by Oracle
                   'BEGIN unlock_accounts; END;',         -- what to run
                   sysdate + interval '15' minute,        -- first run: 15 minutes from now
                   'sysdate + interval ''15'' minute' );  -- then re-run every 15 minutes
  commit;
END;
/
Of course, you could also use a Java scheduler (Quartz is a popular one) or the DBMS_SCHEDULER package to do the same thing. This does require, however, that there is a field somewhere that stores the login timestamp so that the UNLOCK_ACCOUNTS procedure can figure out which logins happened more than 15 minutes ago.
Generally, however, this entire architecture is rather suspect. It's pretty odd that you'd want to have a web-based application (which is inherently stateless) deny logins because the user had opened another browser at some earlier point in time. It's relatively common to time out sessions that have been inactive for a while as a security measure, but 15 minutes is generally way too short for that sort of thing; even bank web sites generally allow you to be idle longer than that. And this approach doesn't even appear to prevent you from being logged in from multiple browsers/computers at the same time, so long as the logins happen to come more than 15 minutes apart.
I have a PL/SQL program which runs a query against an AS400 database through Transparent Gateway. Sometimes the AS400 does not respond to the query (maybe due to network problems) and the PL/SQL program hangs.
Is there any method to set a timeout on the Oracle query so that when a certain amount of time passes an exception is raised?
Have you tried setting the HS_FDS_CONNECT_PROPERTIES parameter in the AS400 Transparent Gateway initialisation file?
For a timeout of 2 minutes:
HS_FDS_CONNECT_PROPERTIES="timeout='120'"
Another more general option for setting a query timeout is to create a profile and assign it to the user running your query.
A resource profile can be used to set limits on all sorts of usage in any particular session - one resource limit available is connection time.
For example, you could create a profile as400_tg_profile and assign it a maximum connection time of 2 minutes:
create profile as400_tg_profile limit connect_time 2;
... then you could assign this profile to the user running the query:
alter user as400_tg_user profile as400_tg_profile;
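One caveat worth noting: CONNECT_TIME, like the other kernel resource limits in a profile, is only enforced when the RESOURCE_LIMIT initialization parameter is set to TRUE:

-- Enable enforcement of profile resource limits such as CONNECT_TIME
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;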
There are lots of options on creating a profile and there are many ways to assign a profile to a particular user so you should read through the documentation.
You could also look into using Oracle Resource Manager, creating resource groups and resource profiles, if you need to dynamically assign particular resource limits - this gives you fine-grained control of resources for individual sessions.
The Oracle documentation is really good on this - for starters, give this a read:
http://www.oracle.com/technology/products/manageability/database/pdf/twp03/twp_oracle%20database%2010g%20resource%20manager.pdf
For more detail:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/dbrm.htm#ADMIN027
This is one of those bits of functionality that's easier to use in Enterprise Manager, but a quick PL/SQL example is given in:
http://www.dba-oracle.com/job_scheduling/resource_manager.htm