How to improve SQLCipher performance? - sqlcipher

I have a very small encrypted SQLite test database. I run a very simple SELECT: just one record from a table which contains one record. This request takes a very significant amount of time: 0.3 s.
lesnik@westfall:~/Projects/ls$ cat sql_enc.sql
PRAGMA KEY = "DUMMYKEYDUMMYKEY";
SELECT * FROM 'version';
lesnik@westfall:~/Projects/ls$
lesnik@westfall:~/Projects/ls$ time sqlcipher rabbits_enc.sqlite3 < sql_enc.sql
key         ver
----------  ----------
1           aaa
real 0m0.299s
user 0m0.297s
sys 0m0.000s
Experiments show that the time doesn't depend on the number of queries in the script or on the size of the database (this test database is just 5 KB; the result is the same on 500 KB databases).
There is no such problem if the database is not encrypted.
Performance is slightly better on another Linux installation (in a different VirtualBox on the same host), and the problem does not occur at all on yet another Linux installation (script execution time is about 0.001 s there), so I believe this is some problem with the environment. But I have no idea how to investigate it further. Any help is appreciated.

We provide general performance guidance for utilizing SQLCipher here
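The dominant cost in a run like the one above is usually key derivation: SQLCipher runs PBKDF2 over the passphrase every time the database is keyed, independent of database size or query count, which matches the constant ~0.3 s observed here. As a rough illustration only (the hex key below is a placeholder, and the PRAGMAs shown are standard SQLCipher ones, not taken from the question), supplying a raw key skips the derivation step:
-- Illustrative sketch: a raw 32-byte key given as hex bypasses PBKDF2,
-- so the per-open derivation cost disappears (placeholder key, do not reuse).
PRAGMA key = "x'2DD29CA851E7B56E4697B0E1F08507293D761A05CE4D1B628663F411A8086D99'";
SELECT * FROM 'version';
-- Alternatively, a database can be created with a lower kdf_iter value
-- (a security/performance trade-off); the value must match how the file was created.
-- PRAGMA kdf_iter = 4000;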

Related

Oracle 10g direct path write/read wait events

My Oracle 10g prod database has a performance problem. Some queries have begun to return in 20 seconds when they used to come back in milliseconds. I got an AWR report, and the top 3 wait events are shown below. I searched but couldn't make sense of them.
Can someone explain these events? Thanks,
Event                   Waits       Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class  Month
----------------------  ----------  -------  ------------  -----------------  ----------  --------
direct path write temp  11,941,557  866,004            73               29.8  User I/O    FEBRUARY
direct path write temp  16,197,445  957,129            59               17.2  User I/O    MARCH
db file scattered read   5,826,190   58,095            10                2.0  User I/O    FEBRUARY
db file scattered read  10,128,657   70,408             7                1.3  User I/O    MARCH
direct path read temp   34,197,762  324,663             9               11.2  User I/O    FEBRUARY
direct path read temp   88,688,686  507,715             6                9.1  User I/O    MARCH
Two of your wait events are related to sorting: direct path write temp and direct path read temp. These indicate an increase in sorting on disk rather than in memory; disk I/O is always slower.
So, what has changed regarding memory allocation usage? Perhaps you need to revisit the values of SORT_AREA_SIZE or PGA_AGGREGATE_TARGET init parameters (depending on whether you are using Automatic PGA memory). Here is a query which calculates the memory/disk sort ratio:
SELECT 100 * (mem.value - dsk.value) / mem.value AS sort_ratio
FROM v$sysstat mem
CROSS JOIN v$sysstat dsk
WHERE mem.name = 'sorts (memory)'
AND dsk.name = 'sorts (disk)'
In an OLTP application we would expect this to be over 95%.
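For reference, the current settings mentioned above can be inspected with a query along these lines (illustrative; these are the standard parameter names, nothing here comes from the original post):
-- Shows whether automatic PGA management is active and how much memory it targets
SELECT name, value
FROM v$parameter
WHERE name IN ('workarea_size_policy', 'pga_aggregate_target', 'sort_area_size');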
The other thing is, instead of looking at macro events you need to look at the specific queries which are running much slower. What has changed with regard to them? Lots more data? New or dropped indexes? Refreshed statistics?
"SORT_RATIO ---------- 99.9985462"
So, disk sorts have increased, but the ratio is still fine. You need to focus on the specific queries.
"in march we begun to user phyton application for some new queries. reason can be this ? "
Could be. Application change is always the prime suspect when our system exhibits different behavior.

Access 2013 report opens very slowly

I have a weird problem here with a report which I use every day.
I moved from XP to Windows 7 some time ago and use Access 2013.
(The language is German, so sorry, I can only guess what the views are called in English.)
"Suddenly" (I really can't say when this started) opening the report in "report view" takes VERY long, around 1 minute or so. Then, switching to "page view" and formatting the report takes only 2 or 3 seconds. Switching back to report view again takes 1 minute.
The report has a complex query as its data source (in fact, a UNION of 8 sub-queries). Opening this query displays the data after 1 second, which is OK.
All tables are linked from the same ODBC data source, which points to a MySQL server on our network.
For further testing I opened every table the queries use, one after another. I noticed that opening these tables takes around 9 seconds per table. It doesn't matter whether it's a small or a big table; it's always these 9 seconds.
The ODBC data source is defined using the IP address of the server, not the name, so I don't consider it a name-server problem / timeout / ...
What could cause this slowdown when opening tables?
I'm puzzled...
Here are a few steps I would try:
Take a fresh copy of the Access app that runs on one of those "fast clients" and see if that solves the issue
Try comparing performance with a fast client after setting the same default printer
Check the version of the ODBC driver on both machines, and if you rely on a DSN, compare the DSN definitions

How many Queries per second is normal for this site?

I have a small site running Flynax classifieds software. I get 10-15 concurrent users at most. Sometimes I get a very high load average that results in outages and downtime on my server.
I run
root@host [~]# mysqladmin proc stat
and I see this :
Uptime: 111346 Threads: 2 Questions: 22577216 Slow queries: 5159 Opens: 395 Flush tables: 1 Open tables: 285 Queries per second avg: 202.766
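(For context, that last figure is simply Questions divided by Uptime: 22,577,216 / 111,346 s ≈ 202.8, i.e. an average over the roughly 31 hours since the server started, not a peak rate.)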
Is 202.766 queries per second normal for a small site like mine?!
The hosting company says my app is poorly coded and must be optimized.
The Flynax developers say the server is weak and must be replaced.
I'm not sure what to do; any help is much appreciated.
202.766 queries per second isn't normal for the small website you described.
Probably some queries run in a loop, and that is why you have such statistics.
As far as I know, the latest Flynax versions have a MySQL debug option; using it
you can see how many queries run on a page and how long each query takes.
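If that Flynax option turns out not to be available, MySQL's own general query log is a generic way to see exactly what runs during one page load (a sketch; leaving it enabled on a busy server is not advisable):
-- Log statements to a table for a short window, inspect, then switch it off again
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- ... load one page of the site, then:
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 50;
SET GLOBAL general_log = 'OFF';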
Cheers

How can I speed up loading data in Oracle tables?

I have some very large tables (to me, anyway), as in millions of rows. I am loading them from a legacy system and it is taking forever. Assume the hardware is OK and reasonably fast. How can I speed this up? I have tried exporting from one system into CSV and using SQL*Loader: slow. I have also tried a direct link from one system to the other, so there is no intermediate CSV file, just unload from one and load into the other.
One person said something about pre-staging tables and that it could somehow make things faster. I don't know what that is or whether it could help. I was hoping for input. Thank you.
Oracle 11g is what is being used.
Update: my database is clustered, so I don't know if I can do anything to speed things up.
What you can try:
disabling all constraints and only enabling them after the load process
CTAS (create table as select)
What you really should do: understand what your bottleneck is. Is it the network, file I/O, checking constraints, ...? Then fix that problem. For me, looking at the explain plan is usually the first step.
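A minimal sketch of the constraint suggestion above (table and constraint names are made up for illustration):
-- Disable constraints before the load, re-enable them afterwards (illustrative names)
ALTER TABLE target_table DISABLE CONSTRAINT target_table_fk1;
ALTER TABLE target_table DISABLE CONSTRAINT target_table_pk;
-- ... run the load ...
ALTER TABLE target_table ENABLE CONSTRAINT target_table_pk;
ALTER TABLE target_table ENABLE CONSTRAINT target_table_fk1;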
As Jens Schauder suggested, if you can connect to your source legacy system via DB link, CTAS would be the best compromise between performance and simplicity, as long as you don't need any joins on the source side.
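A minimal sketch of that approach, assuming a database link named legacy_link and a source table named src_rows (both placeholders):
-- Create and populate the target in one statement, pulling rows across the link
CREATE TABLE staging_rows NOLOGGING AS
  SELECT * FROM src_rows@legacy_link;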
Otherwise, you should consider using SQL*Loader and tweaking some settings. Using direct path I was able to load 100M records (~10 GB) in 12 minutes on a 6-year-old ProLiant.
EDIT: I used the data format defined for the Datamation sort benchmark. The generator for it is available in the Apache Hadoop distribution. It generates records with fixed width fields with 99 bytes of data plus a newline character per line of file. The SQL*Loader control file I used for the numbers quoted above was:
OPTIONS (SILENT=FEEDBACK, DIRECT=TRUE, ROWS=1000)
LOAD DATA
INFILE 'rec100M.txt' "FIX 99"
INTO TABLE BENCH (
BENCH_KEY POSITION(1:10),
BENCH_REC_NBR POSITION(13:44),
BENCH_FILLER POSITION(47:98))
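For completeness, a control file like that is typically run along these lines (credentials and file names are placeholders):
sqlldr userid=scott/tiger control=bench.ctl log=bench.log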
What is the configuration you are using?
Does the database where the data is imported have something like a standby database coupled to it? If so, it is very likely configured with force_logging enabled.
You can check this using
SELECT force_logging FROM v$database;
It can also be enabled at tablespace level:
SELECT tablespace_name, force_logging FROM dba_tablespaces;
If your database is running with force_logging, or your tablespace has force_logging, this will have an impact on the import speed.
If this is not the case, check if archivelog mode is enabled.
SELECT log_mode FROM v$database;
If so, it could be that the archives are not written fast enough. In that case, increase the size of the online redo log files.
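Current redo log group sizes and states can be checked with, for example:
-- Size (in MB) and status of each online redo log group
SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;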
If the database is not running in archivelog mode, it still has to write to the redo log files unless you use direct path inserts. In that case, check how quickly the redo can be written. Normally, 200 GB/h is quite achievable when indexes are not playing a role.
The important thing is to find which link is causing the lack of performance. It could be the input; it could be the output. Here I focused on the output.

PSQL = fast, remote sql = v.slow

Okay, I appreciate that the question is a tad vague, but after a day of googling I'm getting nowhere. Any help will be appreciated, and I'm willing to try anything.
The issue is that we have a PostgreSQL db that has around 10-15 million rows in a particular table.
We're doing a select of all the columns, based on a DateTime field in the table. No joins, just a standard select with a WHERE clause (time >= x AND time <= y). There is an index on the field as well...
When I perform the SQL using psql on the local machine, it runs in around 15-20 seconds and brings back 0.5 million rows, one of which is a text field holding a large amount of data per row (a program stack trace). When we use the same SQL and run it through Npgsql, or pgAdmin III on Windows, it takes around 2 minutes.
This is leading me to think that it's a network issue. I've checked on the machine while the query is running and it's not using a huge amount of memory or CPU, and the network traffic is negligible.
I've gone through the recommendations on the Postgres site for the memory settings as well, including updating shmmax and shmall.
It's Ubuntu 10.04, PostgreSQL 8.4, 4 GB RAM, 2.8 GHz quad-core Xeon (virtual but with dedicated resources). The machine has its Windows counterpart (2008 R2, SQL Server 2008) on there as well, but turned off. The query returns in around 10-15 seconds using SQL Server with the same schema and data. I know this isn't a direct comparison, but I wanted to show that it isn't a disk performance issue.
So the question is... any suggestions? Are there any network settings I should be changing? Anything that I've missed? I can't give too much information about the database, but here is an EXPLAIN ANALYZE that's been obfuscated...
Index Scan using "IDX_column1" on "table1" (cost=0.00..45416.20 rows=475130 width=148) (actual time=0.025..170.812 rows=482266 loops=1)
Index Cond: (("column1" >= '2011-03-14 00:00:00'::timestamp without time zone) AND ("column1" <= '2011-03-14 23:59:59'::timestamp without time zone))
Total runtime: 196.898 ms
Try setting cursor_tuple_fraction to 1 in psql and see if it changes the results. If so, then it is likely that the optimiser is picking a better plan based on only getting the top 10% or so of results compared to getting the whole lot. I seem to recall psql uses a cursor to fetch results piece by piece rather than the "firehose" executequery method.
If this is the case, it doesn't point directly to a solution, but you will need to tweak your planner settings, and at least if you can reproduce the behaviour in psql then it may be easier to see the differences and test changes.
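A minimal way to set that parameter for a session and re-check the plan (cursor_tuple_fraction is a standard planner setting, default 0.1; the query below just reuses the obfuscated names from the plan above):
-- Session-level change only; compare plan and timing against the default
SET cursor_tuple_fraction = 1.0;
EXPLAIN ANALYZE
SELECT * FROM "table1"
WHERE "column1" >= '2011-03-14 00:00:00'
  AND "column1" <= '2011-03-14 23:59:59';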
