Is there a way in Vertica to check the backlog of sessions? For example, to check which sessions were active 6-12-24 hours ago, or 2-4-10 days ago?
You need to look into your Data Collector (DC) retention policy.
You can start by looking at the policy definition using:
select get_data_collector_policy('SessionStarts');
Also, you have these components responsible for sessions:
dbadmin=> SELECT distinct component FROM data_collector where component ilike '%session%';
     component
-------------------
 SessionStarts
 SessionParameters
 SessionEnds
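To check what was active 6, 12, or 24 hours ago (or days ago), you can query the table behind the SessionStarts component and, if the history doesn't reach back far enough, extend its retention. A hedged sketch (the dc_session_starts columns and the KB sizes passed to set_data_collector_policy are illustrative and may vary by version):

-- Sessions that started between 24 and 6 hours ago.
SELECT "time", user_name, session_id, client_hostname
FROM dc_session_starts
WHERE "time" BETWEEN now() - INTERVAL '24 hours'
                 AND now() - INTERVAL '6 hours'
ORDER BY "time";

-- Extend retention if the history doesn't go back far enough
-- (arguments: component, memory KB, disk KB).
SELECT set_data_collector_policy('SessionStarts', '1000', '100000');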
See more details on this page:
How to Manage and Work with Data Collector in Vertica
I want to prevent users from logging in to Oracle BI 12c with the same "username" more than once.
Also, I checked many documents and saw a parameter like "Max Session Limit", but it didn't solve my problem.
Thanks for any guidance toward a solution.
Just as a wrap-up. OBIEE is an analytical platform and you have to think about connections in a different way. As cdb_dba said:
1.) take a step back
2.) think about what you want to do
3.) learn and comprehend how the tool works and does things
4.) decide on how you implement and control things by matching #2 and #3
You can configure this using Database Resource Manager, or by creating a customized profile for the group of users you want to limit sessions for.
Oracle's documentation on profiles can be found at the following link; you want to set the SESSIONS_PER_USER parameter to 1. https://docs.oracle.com/database/121/SQLRF/statements_6012.htm#SQLRF01310
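A minimal sketch (the profile and user names are hypothetical; note that the RESOURCE_LIMIT parameter must be enabled for profile limits to be enforced):

-- Enforce profile limits (not enabled by default in some versions).
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

-- Create a profile that allows one concurrent session per user.
CREATE PROFILE single_session_profile LIMIT SESSIONS_PER_USER 1;

-- Assign the profile to each user you want to restrict.
ALTER USER obiee_user PROFILE single_session_profile;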
Edit based on the additional requirements:
After giving it some thought, I'm not sure you can do something like this at the profile level. You'll probably have to do something like creating a trigger based on the v$session table, which has SCHEMANAME, OSUSER, and MACHINE columns. Since your users share the same schema, you may be able to create a logon trigger that throws an error like "ERROR: Only one Connection per User/Machine" based on either the MACHINE or the OSUSER column of v$session. This is less than ideal for a number of reasons, and your developers will probably hate you, but if you absolutely need to do something like this, it is possible.
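A hedged sketch of such a trigger (the trigger name and message are illustrative; it assumes the owner has a direct SELECT grant on v_$session, and note that users with the ADMINISTER DATABASE TRIGGER privilege are not blocked by errors raised in logon triggers):

CREATE OR REPLACE TRIGGER one_session_per_osuser
AFTER LOGON ON DATABASE
DECLARE
  v_sessions PLS_INTEGER;
BEGIN
  -- Count live sessions from the same OS user (swap in MACHINE if preferred).
  SELECT COUNT(*)
    INTO v_sessions
    FROM v$session
   WHERE osuser = SYS_CONTEXT('USERENV', 'OS_USER')
     AND type = 'USER';
  IF v_sessions > 1 THEN
    RAISE_APPLICATION_ERROR(-20001, 'ERROR: Only one Connection per User/Machine');
  END IF;
END;
/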
I'm working on an app where I have some entities in the database with a column representing the date until which that particular entity is available for certain actions. When it expires, I need to change its state, meaning I update a column representing its state.
What I'm doing so far: whenever I ask the database for those entities in order to do something with them, I first check whether they have expired, and if they have, I update them. I don't particularly like this approach, since it means I will have a bunch of records in the database that are in the wrong state just because I haven't queried them yet. Another approach would be a periodic task that runs over those records and updates them as necessary. I don't like that either, since again I would have records in an inconsistent state, and in that case the first approach seems more reasonable.
Is there another way of doing this? Am I missing something? I should mention that I use Spring Boot + Hibernate for my application, and the underlying DB is PostgreSQL. Is there any technology-specific trick I can use to achieve what I want?
Databases have no "expired" trigger type. If you have something that expires and you need to act on it, there are two solutions (you wrote about both): do some extra handling of expired records before you use the data, or run a cron/task (either at the DB level or on the server side).
I recommend the cron approach. Here is the explanation:
Do something with expired records before you get the data (update before select); a sketch follows below:
+: You update expired data right before you need it. The open question is whether to update only the records you requested or everything that has expired; updating everything can be time-consuming if you need just 2 records but end up updating 2,000 that are unrelated to your working dataset.
-: Updating all records takes a long time. If the database is shared and accessed by more than your application, the expiration logic is not executed for those other clients (if that is your case). You have to control every entry point and decide where the expiration handling must run and where it need not. If expiry is measured in minutes or seconds, new records may expire again a second after you have run the logic. And if the workflow for handling expired data changes, you want that logic in one place (the cron job); with update-before-select, you have to update the changed logic in front of every select too.
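A sketch of the update-before-select pattern in PostgreSQL (the entity table and its state and available_until columns are hypothetical stand-ins for your schema):

BEGIN;
-- Flip everything that has expired before reading.
UPDATE entity
   SET state = 'EXPIRED'
 WHERE state = 'ACTIVE'
   AND available_until < now();
-- The rows read now reflect a consistent state.
SELECT * FROM entity WHERE state = 'ACTIVE';
COMMIT;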
Cron/task:
-: You have to spend time configuring it, but just once (30-60 minutes at most :) ).
+: It executes in the background. If your DB is used by more than your application, the expired-data logic still applies. You don't have to check for stale data in your Java code before every select (or remember to, or explain it to every new employee). And you cleanly split the logic that cares about stale data from the normal queries to the DB.
You can execute SELECT ... FOR UPDATE in the cron job; even if you select from the server side during the update, the query will wait until the stale-data logic completes, and the select will return up-to-date data.
For Spring: see the Spring scheduling documentation and the simple spring-quartz-schedule example.
For the DB level: use a PostgreSQL job scheduler (see the sketch below).
A scheduler/cron is best practice for such things.
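For example, with the pg_cron extension (a hedged sketch; the entity table and its columns are the same hypothetical ones as above):

-- Register a job that expires stale rows once per minute.
SELECT cron.schedule(
  'expire-entities',
  '* * * * *',
  $$UPDATE entity
       SET state = 'EXPIRED'
     WHERE state = 'ACTIVE'
       AND available_until < now()$$
);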
So as part of the reconfiguration of our Oracle database, I am trying to figure out the peak number of processes on an instance of our database.
While I am able to obtain the current processes/sessions running on the database, I am unsure whether there is a function/view in 11g that contains this information.
I've done a search, but nothing has really come up.
Thanks,
C
I hope I understand correctly that you want to find historical data on session activity in order to monitor performance. There are several solutions I can think of.
I'd suggest DBA_HIST_ACTIVE_SESS_HISTORY and V$ACTIVE_SESSION_HISTORY for historical data and V$SESSION for current data. (I guess you are already using V$SESSION to get the current processes/sessions status.)
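For instance, a rough way to see how many distinct sessions were active hour by hour (a sketch; DBA_HIST_ACTIVE_SESS_HISTORY requires the Diagnostics Pack license):

-- Approximate active-session counts per hour from AWR's ASH samples.
SELECT TRUNC(sample_time, 'HH24') AS sample_hour,
       COUNT(DISTINCT session_id) AS active_sessions
  FROM dba_hist_active_sess_history
 GROUP BY TRUNC(sample_time, 'HH24')
 ORDER BY sample_hour;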
If that is not sufficient, you can always create your own procedure with scheduled execution and store snapshots of the current status somewhere, or use a Zabbix trapper (or any other monitoring tool of your choice) to monitor DB activity.
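A minimal sketch of the snapshot idea (the table and job names are hypothetical; the owner needs direct SELECT grants on v_$session and v_$process):

-- Table to hold periodic counts of sessions and processes.
CREATE TABLE session_snapshot (
  snap_time     DATE,
  session_count NUMBER,
  process_count NUMBER
);

-- Take a snapshot every 5 minutes with DBMS_SCHEDULER.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'SNAP_SESSIONS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          INSERT INTO session_snapshot
                          SELECT SYSDATE,
                                 (SELECT COUNT(*) FROM v$session),
                                 (SELECT COUNT(*) FROM v$process)
                          FROM dual;
                          COMMIT;
                        END;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',
    enabled         => TRUE);
END;
/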
If you have Oracle Enterprise Manager, I'd recommend using that. You can use its performance page to see both current and historical session data and also generate ASH reports.
Peak processes and sessions are available in the Active Workload Repository (AWR) in DBA_HIST_SYSMETRIC_SUMMARY.
--Maximum number of sessions.
select instance_number, maxval, begin_time
from dba_hist_sysmetric_summary
where metric_name = 'Session Count'
order by maxval desc, begin_time desc;
--Maximum number of processes.
--This is a little trickier because only the Limit is stored so it must be compared
--to the parameter value.
--This assumes that the PROCESSES parameter hasn't changed recently.
select instance_number, round(maxval/100 * gv$parameter.value) processes, begin_time
from dba_hist_sysmetric_summary
join gv$parameter
on dba_hist_sysmetric_summary.instance_number = gv$parameter.inst_id
where gv$parameter.name = 'processes'
and metric_name = 'Process Limit %'
order by processes desc, begin_time desc;
That's probably enough information to set the PROCESSES and SESSIONS parameters. If what you really need is to know how active the database was, then looking at
DBA_HIST_ACTIVE_SESS_HISTORY, like J91321 mentioned, is probably a better method.
We moved Algolia search from our local development environment to our staging environment. On staging we have 144,000 sample orders and 100,000 products. Both of these numbers are smaller than our production environment.
We inserted our app id and other credentials and saved. We're using AOE scheduler to execute our crons. algoliasearch_run_queue has been running for 5 hours now and it seems to be making the same queries:
SELECT SUM(order_items.qty_ordered) AS ordered_qty, order_items.name AS order_items_name, `o....
I believe this is related to ranking = ordered_qty. This cron is holding up all processing of subsequent crons, meaning other Magento tasks (order emails, indexing, etc.) will not run while this one is running.
What is the fix for this?
An improvement was made in 1.4.3, but it will probably not resolve the issue for such a big store. Computing ordered_qty can indeed take a long time, but it's used to achieve good relevance.
opendeals and openrevenue are two out-of-the-box (OOB) fields on Account in CRM 2015 Online. Field-level security is turned on by default for these fields.
However, I'm not able to edit the Update and Create privileges for these fields in Field Level Security; the drop-down to assign the privilege is disabled.
Is this a known issue, or is it supposed to be this way?
That's because they are rollup fields (MSDN ref), which are handled by the system itself.
Rollup attributes
Because rollup attributes persist in the database, they can be used for filtering or sorting just like regular attributes. Any kind of process or plug-in will use the most recently calculated value of the attribute. Rollup attribute values are calculated asynchronously by scheduled system jobs. Administrators set when a job is run or pause the job. By default, each attribute is updated hourly.
When a rollup attribute is created or updated a Mass Calculated Rollup Fields job is scheduled to run in 12 hours. The 12-hour delay is intended to perform this resource intensive operation during a time that will affect users the least. After the job completes, the next time it is scheduled to run will be 10 years in the future. If there is a problem with the calculation, this will be reported with the system job.
Locate the system job in Settings > System Jobs to find any errors with rollup fields.