Tokyo Tyrant database size limit - key-value-store

I have a 65 GB database and it looks like it is no longer accepting new entries.
Is there any size limit for a Tokyo Tyrant database?

Related

Will an OBIEE RPD work if not all the tables in the physical layer are present in the database?

Here's a scenario:
Say I have a report which touches 10 tables in the physical layer. There are 100 tables in the physical layer of the RPD. Now I am changing the connection pool in the physical layer to point it to a different database/data source. This new database contains only the 10 tables my report uses (with the exact same structure as in the previous database); the remaining 90 tables from the physical layer are not present.
Will my report work after this repointing to a different database/data-source? Or do I have to create all those 100 tables in the new database before trying to run my report?
I am just changing the connection pool in the RPD. Otherwise everything in the RPD remains unchanged.
Yes, it will. You will start seeing errors as soon as you reference something that doesn't exist.

Data ingested into SQL DB too large in comparison to Oracle source

Hello, I am using Azure Data Factory to ingest data from Oracle into SQL DB; the data are extracted in CSV format. The problem is that at the source I am reading about 10 GB of data, but once ingested into Azure SQL DB the data size becomes 40 GB.
Is that normal? And is there a way to lower the destination data size?
Thank you
Try setting the table compression level. If the table has many number columns, SQL Server's ROW compression is similar to Oracle's default storage style for numbers and should produce a storage size similar to Oracle's.
PAGE compression provides additional compression strategies and may give you ~3X compression.
And if the table has many millions of rows, use a clustered columnstore table, which may provide ~10X compression.
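As a rough illustration, here is a minimal Python sketch (using pyodbc; the server, credentials, and table name are placeholders) of how the three options could be applied:

    import pyodbc

    # Placeholder connection string -- substitute your own server and credentials.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=mydb;"
        "UID=myuser;PWD=secret"
    )
    conn.autocommit = True  # DDL statements, no transaction needed
    cur = conn.cursor()

    # Option 1: ROW compression -- variable-length storage for fixed-width
    # types, roughly comparable to Oracle's default storage for numbers.
    cur.execute("ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = ROW)")

    # Option 2 (instead of ROW): PAGE compression -- adds prefix and
    # dictionary compression on top of ROW, often ~3X.
    # cur.execute("ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE)")

    # Option 3 (many millions of rows): clustered columnstore, often ~10X.
    # Assumes the table does not already have a clustered index.
    # cur.execute("CREATE CLUSTERED COLUMNSTORE INDEX cci_MyTable ON dbo.MyTable")

Comparing the output of EXEC sp_spaceused 'dbo.MyTable' before and after the rebuild shows the actual savings.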

Oracle Apex maximum record in a table

My idea is to create a website for storing a large amount of information, and I would like to know how many records we can store in an Apex table.
Apex is a tool; it doesn't store anything. It is the database that does.
If that database is Oracle, you most probably shouldn't worry, as the sky is the limit. OK, not really - disk space is.
For more info:
19c physical database limits
19c logical database limits
(as 19c is the current release)
You don't specify the version of the database. ALWAYS specify the version when asking these types of questions.
You also don't specify whether you mean the logical or physical maximum. Physical is limited by disk space. Per the docs, logical is unlimited.
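If you want a quick sanity check of how close you are to the physical limit, summing the datafile sizes is enough; here is a minimal sketch with the python-oracledb driver (credentials and DSN are placeholders, and the query needs access to DBA_DATA_FILES):

    import oracledb

    # Placeholder credentials/DSN -- substitute your own.
    conn = oracledb.connect(user="system", password="secret",
                            dsn="dbhost:1521/ORCLPDB1")
    cur = conn.cursor()

    # Total space currently allocated to datafiles.
    cur.execute("SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) "
                "FROM dba_data_files")
    allocated_gb, = cur.fetchone()
    print(f"Allocated datafile space: {allocated_gb} GB")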

How do I transfer a huge amount of data (nearly 10 TB) from Oracle DB to Snowflake in hours

How do I transfer a huge amount of data (nearly 10 TB) from Oracle DB to Snowflake in hours? I see some options like Hevo and Fivetran, which are paid. However, I need the data to be moved fast so that I don't have to keep the production system down.
The fastest way to get data into Snowflake is in 10 MB to 100 MB chunk files. Then, you can leverage a big warehouse to COPY INTO all of the chunk files at one time. I can't speak to how to get the data out of Oracle DB quickly to S3/Azure Blob, though, especially while the system is running its normal workload.
I recommend you look at this document from Snowflake for reference on the Snowflake side: https://docs.snowflake.net/manuals/user-guide/data-load-considerations-prepare.htm
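For the Snowflake side of that, a minimal sketch with the snowflake-connector-python package might look like this (the account, warehouse, and table names are all assumptions; the chunk files are assumed to be gzipped CSVs):

    import snowflake.connector

    # All identifiers below are placeholders.
    conn = snowflake.connector.connect(
        account="myorg-myaccount", user="loader", password="secret",
        warehouse="LOAD_WH", database="MYDB", schema="PUBLIC",
    )
    cur = conn.cursor()

    # Scale the warehouse up so the chunk files load in parallel.
    cur.execute("ALTER WAREHOUSE LOAD_WH SET WAREHOUSE_SIZE = 'X-LARGE'")

    # Upload the 10-100 MB chunk files to the table's internal stage
    # (alternatively, point an external stage at S3/Azure Blob).
    cur.execute("PUT file:///data/export/orders_*.csv.gz @%ORDERS AUTO_COMPRESS=FALSE")

    # One COPY INTO picks up all staged chunks and loads them in parallel.
    cur.execute("""
        COPY INTO ORDERS
        FROM @%ORDERS
        FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"')
    """)

The bigger the warehouse, the more chunk files Snowflake loads concurrently, which is why the chunking matters.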
Is there a network speed issue?
Anyways, the data should be compressed when transferred over the network.
There are three locations involved in the staging:
Oracle database,
the extraction client,
and the cloud storage.
You have two data transfers:
between database and client,
and between client and cloud storage.
If the Oracle version is 12cR2 or newer, the database client can compress the data as it pulls it out of the database. The data should then be compressed again for the transfer from the extraction client to the cloud storage at your Snowflake destination.
The final step is to load the data from cloud storage into Snowflake (within the same data center)...
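As a sketch of the client-side piece, compressing each extracted chunk before it leaves the extraction client could look like this (the chunk size and file naming are assumptions):

    import csv
    import gzip

    CHUNK_ROWS = 500_000  # rows per file; tune so compressed chunks land ~10-100 MB

    def flush(rows, path):
        # gzip on the extraction client means the client -> cloud storage
        # hop moves compressed data, and Snowflake ingests .gz directly.
        with gzip.open(path, "wt", newline="") as f:
            csv.writer(f).writerows(rows)

    def write_compressed_chunks(rows, prefix):
        """Write an iterable of row tuples to gzipped CSV chunk files."""
        chunk, part = [], 0
        for row in rows:
            chunk.append(row)
            if len(chunk) >= CHUNK_ROWS:
                flush(chunk, f"{prefix}_{part:05d}.csv.gz")
                chunk, part = [], part + 1
        if chunk:
            flush(chunk, f"{prefix}_{part:05d}.csv.gz")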
Ideally you shouldn't need to keep the production database down. You should be able to categorise the data into:
1 - historical data that will not change. You can extract this data at your own leisure; it should not require the database to be down.
2 - static data that is fairly stable. You can also extract this data at your leisure.
You only need to keep your database fairly stable (not down) while you are extracting the rest of the data. This will require you to build some way to track and validate all your datasets, as sketched below. There is no reason why you couldn't let users continue to read from the database while you are performing the extract from Oracle.
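A sketch of how that split could drive the extraction, again with python-oracledb (the table, date column, and cutoff are purely illustrative):

    import datetime
    import oracledb

    # Placeholder credentials and schema -- adjust to your environment.
    conn = oracledb.connect(user="extract", password="secret",
                            dsn="prodhost:1521/PROD")
    cur = conn.cursor()
    cur.arraysize = 10_000  # fetch in large batches for throughput

    CUTOFF = datetime.date(2020, 1, 1)  # illustrative split point

    # 1) Historical rows that will not change: extract at leisure while
    #    production keeps serving reads and writes.
    cur.execute("SELECT * FROM orders WHERE order_date < :cutoff", cutoff=CUTOFF)
    for batch in iter(lambda: cur.fetchmany(), []):
        pass  # write batch to a compressed chunk file (see earlier sketch)

    # 2) The volatile remainder: extract during a quiet window, recording
    #    counts and the high-water mark so each dataset can be validated.
    cur.execute("SELECT COUNT(*), MAX(order_date) FROM orders "
                "WHERE order_date >= :cutoff", cutoff=CUTOFF)
    print(cur.fetchone())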

Cost-effective and timely system to do business-logic queries (batch processing) on weekly snapshots of a 15 GB MySQL database?

We want to do batch operations on a series of MySQL snapshots but are unsure of what architecture to pursue. We have over 100 snapshots of a 15 GB database.
We are currently debating which architecture would be neither embarrassingly slow nor huge overhead (time-wise or monetarily). Here is what we have considered:
Loading each DB into an OpenStack VM, running our query, then saving the results off somewhere in a Redis instance from which we can use Python to dump the reports. (This would be slow, no?)
Loading everything into Elasticsearch and performing queries via Elasticsearch. (This would not allow the robustness/logic of querying we want.)
We have also looked at Hive and Presto, but wouldn't those require large amounts of hardware?
On a small scale, some of our tables look like this:
client:
    client_id (primary key)
    name
    address
    city
device:
    device_id (primary key)
    name
    location
service:
    service_id (primary key)
    name
    price
    client_id
    device_id
(there are no foreign-key relationships between the tables)
Here is an example of what we want to do:
Look at all the services that have increased in price over the last month of snapshots (4 snapshots). For each service whose price increased, get the client's location and the device name. Dump the result to a CSV or into visualization software like Kibana.
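As an illustration of that report, assuming each snapshot's tables have been exported to CSV (the file paths here are hypothetical), the price-increase query is a couple of pandas joins:

    import pandas as pd

    # Hypothetical exports of the oldest and newest of the 4 snapshots.
    old = pd.read_csv("snapshots/2024-05-01/service.csv")
    new = pd.read_csv("snapshots/2024-06-01/service.csv")
    clients = pd.read_csv("snapshots/2024-06-01/client.csv")
    devices = pd.read_csv("snapshots/2024-06-01/device.csv")

    # Compare each service's price across the two snapshots.
    merged = new.merge(old[["service_id", "price"]],
                       on="service_id", suffixes=("", "_old"))
    increased = merged[merged["price"] > merged["price_old"]]

    # Attach the client's city and the device's name; there are no
    # foreign keys, so join explicitly on the id columns.
    report = (increased
              .merge(clients[["client_id", "city"]], on="client_id")
              .merge(devices[["device_id", "name"]],
                     on="device_id", suffixes=("_service", "_device")))

    report.to_csv("price_increases.csv", index=False)

The same frame could be pushed into Elasticsearch for Kibana, but for a pure batch report the CSV is enough.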
