Migrated DB size is smaller than the original - Heroku

I followed the Heroku tutorial to upgrade my DB from Dev to Basic, using the backups plugin.
Everything seems to work just fine, but there is something that bothers me - the size of the new DB is smaller than the size of the original DB!
When I run heroku pg:info, I get the following information:
# New DB (10.7 MB)
Plan: Basic
Status: available
Connections: 1
PG Version: 9.2.4
Created: 2013-10-07 17:33 UTC
Data Size: 10.7 MB
Tables: 28
Fork/Follow: Unsupported
# Old DB (14.9 MB)
Plan: Dev
Status: available
Connections: 1
PG Version: 9.2.4
Created: 2013-04-25 13:46 UTC
Data Size: 14.9 MB
Tables: 28
Rows: 9717/10000 (In compliance, close to row limit)
Fork/Follow: Unsupported
Also, how can I get the row count of the new DB? (I want to validate that it has the same number of rows.)
Thanks!

I have a similar issue upon upgrading from Ronin to Standard Tengu (9.2 -> 9.3)
I believe the following answers it:
PostgreSQL database size is less after backup/load on Heroku
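As for the row-count question: one option is to query the pg_stat_user_tables view, whose n_live_tup column holds per-table row estimates. A minimal sketch in Python with psycopg2, assuming the DATABASE_URL config var points at the new database (run ANALYZE first if the estimates look stale):

import os
import psycopg2  # pip install psycopg2-binary

# Connect using the DATABASE_URL Heroku sets for the attached database.
conn = psycopg2.connect(os.environ["DATABASE_URL"])
cur = conn.cursor()

# n_live_tup is an estimate maintained by the statistics collector;
# it is usually close enough to validate that both databases hold the same data.
cur.execute("SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname")
for table, rows in cur.fetchall():
    print(table, rows)

# Total across all user tables (compare against the row count reported by pg:info).
cur.execute("SELECT sum(n_live_tup) FROM pg_stat_user_tables")
print("total rows:", cur.fetchone()[0])

cur.close()
conn.close()

For an exact check you can still run SELECT count(*) on individual tables; on a database this small that is cheap.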

Related

How to obtain database info and statistics using mgconsole?

When I use Memgraph Lab, I can see the database statistics at the top of the window.
How can I obtain info such as the Memgraph version, number of nodes, relationships, etc. when I'm using mgconsole?
To get the Memgraph version that is being used, run the SHOW VERSION; query.
To get information about the storage of the current instance, use SHOW STORAGE INFO;. This query will give you the following info (a small scripted example follows the list below):
vertex_count - Number of vertices stored
edge_count - Number of edges stored
average_degree - Average number of relationships of a single node
memory_usage - Amount of RAM used reported by the OS (in bytes)
disk_usage - Amount of disk space used by the data directory (in bytes)
memory_allocated - Amount of bytes allocated by the instance
allocation_limit - Current allocation limit in bytes set for this instance
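If you want to script the same checks rather than type them into mgconsole, here is a minimal sketch using pymgclient, Memgraph's Python driver; the host and port are assumptions for a default local instance:

import mgclient  # pip install pymgclient

# Assumes a local Memgraph instance listening on the default Bolt port.
conn = mgclient.connect(host="127.0.0.1", port=7687)
conn.autocommit = True
cur = conn.cursor()

cur.execute("SHOW VERSION;")
print("version:", cur.fetchone()[0])

cur.execute("SHOW STORAGE INFO;")
for name, value in cur.fetchall():
    print(name, "=", value)  # vertex_count, edge_count, memory_usage, ...

# Exact counts can also be obtained with plain Cypher (these two queries
# work the same way when typed directly into mgconsole):
cur.execute("MATCH (n) RETURN count(n);")
print("nodes:", cur.fetchone()[0])
cur.execute("MATCH ()-[r]->() RETURN count(r);")
print("relationships:", cur.fetchone()[0])

conn.close()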

Can dumping and restoring a database make it slower?

I have an Amazon RDS Postgres database. I created a snapshot of this database (say database-A) and then restored the snapshot on a new DB instance (say database-B). database-A was an 8 GiB machine with 2 cores; database-B is a 3.75 GiB machine with 1 core.
I find the following:
The storage occupied by database-B is greater than that of database-A. I found the occupied storage using pg_database_size.
I find the queries slower on database-B than they were on database-A.
Are these two things possible in a normal scenario, or must I have made some mistake during the dump/restore process?
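For reference, a minimal sketch of how the two sizes can be compared from a script with psycopg2 (the endpoints and credentials are placeholders); listing pg_total_relation_size per table is a quick way to see which tables account for the difference:

import psycopg2  # pip install psycopg2-binary

# Placeholder connection strings for the two RDS instances.
ENDPOINTS = {
    "database-A": "host=database-a.example.rds.amazonaws.com dbname=mydb user=me password=secret",
    "database-B": "host=database-b.example.rds.amazonaws.com dbname=mydb user=me password=secret",
}

for name, dsn in ENDPOINTS.items():
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # The same number the question refers to: total on-disk size of the database.
            cur.execute("SELECT pg_size_pretty(pg_database_size(current_database()))")
            print(name, "database size:", cur.fetchone()[0])
            # Per-table breakdown (table + indexes + TOAST), largest first.
            cur.execute(
                "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) "
                "FROM pg_stat_user_tables "
                "ORDER BY pg_total_relation_size(relid) DESC LIMIT 10"
            )
            for relname, size in cur.fetchall():
                print(" ", relname, size)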

MonetDB: !FATAL: BBPextend: trying to extend BAT pool beyond the limit (16384000)

Our monetdbd instance throws the error "!FATAL: BBPextend: trying to extend BAT pool beyond the limit (16384000)" after restarting from a normal shutdown (monetdbd start farm works, monetdb start database fails with the given error).
The database contains fewer than 10 tables, and each table has between 3 and 22 fields. The overall database size is about 16 GB, and a table with 5 fields (3 ints, 1 bigint, 1 date) has 450 million rows.
Does anyone have an idea how to solve this problem without losing the data?
monetdbd --version
MonetDB Database Server v1.7 (Jan2014-SP1)
Server details:
Ubuntu 13.10 (GNU/Linux 3.11.0-19-generic x86_64)
12-core CPU (hexa-core + HT): Intel(R) Core(TM) i7 CPU X 980 @ 3.33GHz
24 GB RAM
2x 120 GB SSD, software RAID 1, LVM
Further details:
# wc BBP.dir: "240 10153 37679 BBP.dir"
It sounds strange. What OS and hardware platform?
Are you accidentally using a 32-bit Windows version?

How can I reduce the data fetch time with Mongo on a bigger data size?

We have a collection (name_list) of 30 million 'names'. We are comparing these 30 million records with 4 million 'names' that we fetch from a txt file.
I am using PHP on a Linux platform. I created an index on the 'names' field. I am using a simple find to compare the MongoDB data with the txt file's data:
$collection->findOne(array('names' => $name_from_txt))
I am comparing them one by one. I know a join is not possible in MongoDB. Is there any better method to compare data in MongoDB?
The OS and other details are as follows.
OS : Ubuntu
Kernel Version : 3.5.0-23-generic
64 bit
MongoDB shell version: 2.4.5
CPU info - 24
Memory - 64G
Disks - 3, out of which Mongo is written to a Fusion-io disk of size 320G
File system on the Mongo disk - ext4 with noatime, as mentioned in the Mongo docs
ulimit settings for mongo changed to 65000
readahead is 32
NUMA is disabled with the --interleave option
When I use a script to compare these, it takes around 5 minutes to complete. What can be done so that it executes faster and finishes in, say, 1-2 minutes? Can anyone help, please?
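One thing to try (a sketch, not tested at your scale): instead of issuing one findOne per name, batch the names from the txt file and look them up with a single indexed $in query per batch, which cuts 4 million round trips down to a few thousand queries. The example below uses pymongo rather than the PHP driver, and the file, database, and connection details are placeholders; the same idea carries over directly to PHP.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
coll = client["mydb"]["name_list"]                  # database name is a placeholder

BATCH = 1000  # tune this; keep each $in list comfortably below the 16 MB BSON limit

def batches(path, size=BATCH):
    # Yield lists of names read from the txt file, one name per line.
    chunk = []
    with open(path) as f:
        for line in f:
            name = line.strip()
            if name:
                chunk.append(name)
            if len(chunk) >= size:
                yield chunk
                chunk = []
    if chunk:
        yield chunk

found = set()
for chunk in batches("names.txt"):                  # placeholder file name
    # One indexed query per batch instead of one findOne per name.
    for doc in coll.find({"names": {"$in": chunk}}, {"names": 1, "_id": 0}):
        found.add(doc["names"])

print("matched", len(found), "names from the txt file")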

How to generate a RowId automatically in an HBase MapReduce program

I need to load a dataset file into an HBase table. I googled some examples, and with those examples I tried reading a file and loading it into HBase, but only the first row is read. I need to read all the data, and I don't know where I went wrong.
I have the file in this format:
year class days mm
1964 9 20.5 8.8
1964 10 13.6 4.2
1964 11 11.8 4.7
1964 12 7.7 0.1
1965 1 7.3 0.8
1965 2 6.5 0.1
1965 3 10.8 1.4
1965 4 13.2 3.5
1965 5 16.1 7.0
1965 6 19.0 9.2
1965 7 18.7 10.7
1965 8 19.9 10.9
1965 9 16.6 8.2
Please, can anyone tell me where I went wrong? I need to load all the data contained in the file, but I can only load the first row of data.
https://github.com/imyousuf/smart-dao/tree/hbase/smart-hbase/hbase-auto-long-rowid-incrementor/ - I did not test it, but it seems to be what you're looking for.
Also, look at Hbase auto increment any column/row-key.
Monotonically increasing row keys are not recommended in HBase; see this for reference: http://hbase.apache.org/book/rowkey.design.html, section 6.3.2. In fact, using globally ordered row keys would cause all instances of your distributed application to write to the same region, which will become a bottleneck.
I guess it's because the row keys of your table are by default taking the value of the first column, which is 'year', so HBase will only keep it once, since a row key cannot be duplicated.
Try to set your row key to a different column (or a combination of columns); see the sketch below.
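To illustrate the composite-key idea, here is a minimal sketch that loads the file with a year+class row key, so no line overwrites another. It uses the happybase Python client for brevity (the table name, column family, file name, and Thrift host are all assumptions); in a MapReduce job the fix is the same: build the Put with a composite key instead of just the year.

import happybase  # pip install happybase; requires the HBase Thrift server to be running

connection = happybase.Connection("localhost")  # Thrift host is an assumption
table = connection.table("climate")             # table name is an assumption

with open("dataset.txt") as f:                  # file name is an assumption
    next(f)  # skip the "year class days mm" header line
    for line in f:
        year, klass, days, mm = line.split()
        # Composite row key: year + class is unique per line, so no row
        # overwrites another (a key of just the year keeps one row per year).
        row_key = ("%s-%02d" % (year, int(klass))).encode()
        table.put(row_key, {
            b"d:days": days.encode(),
            b"d:mm": mm.encode(),
        })

connection.close()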
