I am having trouble clearing the Mondrian schema cache on my BI server. I go to
Tools -> Refresh -> Mondrian Cache, but clear_mondrian_schema_cache.xaction does not seem to clear the cache.
I need the results to update as the source data changes, but I seem to keep getting cached results every time I issue a query.
Can someone help me with the API to enable periodic schema cache refresh?
I use the following bash script, which works nicely without any API faff :). Run from cron, it also gives you the periodic refresh you asked about.
It is the equivalent of clicking Tools > Refresh > Mondrian Cache in the console.
The variable schema is the name of the schema as you see it in the Schema Workbench app; note that it has to be URL-encoded (hence the %20 below).
#!/bin/bash
#
# script to clear the Pentaho Mondrian schema cache
user=XXX
pass=XXX
host=localhost
schema=Reporting%20schemas
# quoting the URL means the &s no longer need escaping
wget --no-check-certificate "http://${host}:2310/pentaho/content/analyzer/ajax/clearCache?catalog=${schema}&userid=${user}&password=${pass}"
Credit to Pentaho support.
I have to delete user data in a database used by a Laravel/Eloquent application. It can be done with some SQL queries.
Problem: I have no clue about Laravel/Eloquent. That's where I need your help.
Is it safe to bypass Laravel/Eloquent and just change or delete data in the database using SQL queries (manually or with a script)?
With the assurance provided in the comments on my original question, I deleted the data using some SQL statements, executed through the phpMyAdmin SQL feature.
Everything worked as expected.
Of course I studied the data model, backed everything up, and ran tests on a copy of the database before performing the change on the production system.
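For anyone in the same spot, a minimal sketch of that kind of manual cleanup, assuming a hypothetical users table with a dependent orders table (all table and column names here are made up, not taken from the question):
-- Hypothetical cleanup of one user's data; test on a copy first.
START TRANSACTION;
-- Delete child rows before parent rows so foreign key
-- constraints are not violated.
DELETE FROM orders WHERE user_id = 42;
DELETE FROM users  WHERE id = 42;
COMMIT;
Wrapping the statements in a transaction keeps the cleanup atomic: either every row goes, or none do.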
I'm having an issue with a prompt in OBIEE 10g: it displays old database values because the prompt query is served from the cursor cache (Presentation Services). For example, if the prompt drop-down initially shows one value because there is one database row, and I then delete this row from the database, the prompt still shows the same value unless I manually clear the cursor cache through Analytics:
Settings > Administration > Manage Sessions > clear cache/cursors
I tried checking the OBIEE Presentation Services config file instanceconfig.xml, but there is no parameter there to permanently disable this cache. I referenced the following link: OBIEE 10G/11G - Presentation Service (Query|Result|Cursor) Cache.
Resetting these parameters didn't seem to have any impact on the cursor cache; cache entries are still generated and are not cleared after the configured timeouts (I restarted the OBIEE services after changing the parameters). Am I missing something here?
I would appreciate any pointers on getting the cursor cache cleared or disabled without the manual intervention mentioned above (through Settings > Administration).
At some point I also faced that issue. The presentation cache in OBIEE is a bit shady sometimes.
What I did was add a dummy comparison to the prompt's query, involving SYSDATE with enough precision that each query looks different to the cache.
It's a bit shabby, but at least you don't need any manual intervention... Maybe it can help you.
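For illustration, the dummy comparison might look something like this (the table and column names are hypothetical, and an Oracle source is assumed, since SYSDATE is Oracle syntax):
-- Hypothetical prompt query; the SYSDATE comparison is always true,
-- but is meant to make the query look fresh to the cursor cache.
SELECT status
FROM   order_status
WHERE  TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS') = TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS')
How well this defeats the cache depends on how Presentation Services keys its cursor cache, so treat it as the workaround described above rather than a guaranteed fix.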
Good luck!
You may also see this issue if you are using a presentation variable, rather than a prompt built using a SQL query.
The problem may be due to the shared Presentation Services query cache: even when a user logs out, the query cursor cache is still shared by other users, so it does not show the new data after the user logs in again.
The cache files are in
ORACLE_INSTANCE/tmp/OracleBIPresentationServices/coreapplication_obipsn/obis_temp
See this document for more detail.
You can configure the Virtual Private Database option in the repository physical database object and mark session variables as Security Sensitive in the repository to make the query cache not shared among users. See this documentation for more detail.
I am using an in-memory HSQLDB database with a JDBC driver.
Now I am looking for a way to persist this database so it can be reloaded after an application reboot. I came up with the following options:
1) Export a .script file with the SQL command SCRIPT '<path>' (link)
2) Log all statements to a log file.
Option 2 works, but it seems kind of ugly in my eyes. The script export for option 1 works too, but I seem to be unable to get the .script file back into an in-memory database.
I am thankful for any advice.
The first option is correct.
After you export the database with the SCRIPT '<path>' statement, you can get it back into an in-memory database: connect to the scripted database with a read-only file: URL.
For example, if you export the database to d:/dbfiles/mydb.script, you will get the mydb.script file in the named directory. To connect to this database, use file:d:/dbfiles/mydb;files_readonly=true.
There is absolutely no speed difference between the above method and a mem: database.
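A minimal sketch of the round trip (the path and the JDBC URLs in the comments are illustrative, not taken from the question):
-- While connected to the in-memory database
-- (e.g. jdbc:hsqldb:mem:mydb), dump schema and data to a script file:
SCRIPT 'd:/dbfiles/mydb.script';
-- After the reboot, connect with the read-only file URL instead:
--   jdbc:hsqldb:file:d:/dbfiles/mydb;files_readonly=true
-- HSQLDB replays mydb.script into memory and never writes back to disk.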
H2 Database is not very stable (but very fast, which is very good for dev), especially during the development process. I hope the number of corruptions is due to the immediate shutdown of the server (during debugging).
How do I ensure that an H2 database is not corrupted, in order to guarantee that a backup is good?
Probably the best way to check if everything is OK is to create a SQL script from the database, using the SCRIPT statement. If that works, then the data is fully readable. The index data might still be corrupt, but indexes can be re-created.
Another option is to always backup the data in the form of a SQL script. This will make a separate check unnecessary; but backup is a bit slower and can't be done online (while updates are happening).
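For instance, the check-and-backup could look like this (the file name is illustrative):
-- Dump the whole database (DDL plus data) to a SQL script; if this
-- completes without errors, the table data is fully readable.
SCRIPT TO 'backup.sql';
-- To restore the backup into a fresh database later:
RUNSCRIPT FROM 'backup.sql';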
By the way: if a database file gets corrupt, it's due to misconfiguration or wrong usage (H2 supports disabling the transaction log), due to hardware failure, or due to a bug in the database engine itself.
My delayed job exports a slightly edited version of most of the tables in the app's database, and while it runs, it is critical that none of the current data is being edited.
Is it possible to lock the entire database while running this delayed job?
More Information:
The database to be exported is PostgreSQL (Heroku Postgres, to be more specific).
The flow is something like (all below should be done automatically by the code):
site would be put in maintenance mode,
freeze, then export the database, then
when the export is complete, re-activate the site
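For the freeze-and-export step, a sketch of what explicit locking could look like in SQL (the table names are hypothetical; PostgreSQL has no single statement that locks a whole database, so you lock the tables you export):
-- Locks taken inside a transaction are held until COMMIT.
BEGIN;
-- SHARE mode still allows reads, but blocks INSERT/UPDATE/DELETE.
LOCK TABLE users, orders, line_items IN SHARE MODE;
-- ... run the export here, e.g. COPY each table out
-- (on Heroku, via psql's \copy or pg_dump) ...
COMMIT;  -- releases the locks
Note that pg_dump on its own already exports from a single consistent snapshot without blocking writers, which may remove the need for explicit locks entirely.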
Given there is not a lot of information with your question, I am going to answer you as best I can.
1) What is the database type and model? Is it a standalone DB like MS Access or Informix SE?
2) If not a standalone engine, does this database support replication? I used to work a lot with MS SQL Server, and replication had implications while the database was live and being edited. That is, the implications concerned whether edited data was replicated. In this case, consult the docs. Is it an option to use replication to preserve the current database?
3) What kind of task is this? It sounds like maintenance. Our Informix SE databases lock when being imported or exported. On the production server, it is my job to make sure no local server applications are trying to access the locked DB, and that our external payments web site cannot interfere while the db is locked.
4) If this is a production site that is not in maintenance mode, then I suggest you probably do not want to lock an entire database.
I am sorry for not answering your question directly, but more information is needed, such as whether you are asking if this can be done from the Ruby DB interface on some model of database.