SQL Server 2008 R2 Express 10GB Filesize limit - sql-server-2008r2-express

I have reached the file size limit on my SQL Server 2008 R2 Express database, which I believe is 10 GB. I know this because I see Event ID 1101 in the event log:
Could not allocate a new page for database 'ExchangeBackup' because of insufficient disk space in filegroup 'PRIMARY'
I have removed some historical data to work around the problem for now, but that is only a temporary fix. One table (PP4_MailBackup) is much larger than the others, so when I created this database 12 months ago I converted it to a FILESTREAM table, with the data stored outside the filegroup in the file system. This appeared to be working successfully until I received the error and new data was no longer being added to my database.
When I run a report on table sizes, the Reserved (KB) column adds up to almost 10 GB.
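(For reference, a query along these lines, just a sketch using sys.dm_db_partition_stats, gives similar per-table reserved figures; pages are 8 KB each.)
SELECT OBJECT_NAME(object_id) AS table_name,
       SUM(reserved_page_count) * 8 AS reserved_kb
FROM sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY reserved_kb DESC;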
The folder that holds my FILESTREAM data is 176 GB.
The database .mdf file is indeed 10 GB.
Does anyone have any idea why the table PP4_MailBackup is still using nearly 7 GB?
Here is the "Standard Reports -> Disk Usage report" for this database:
Thanks in advance
David
Update
Here is some more info.
There are 868,520 rows in this table.
This command returns 1, so I'm assuming ANSI_PADDING is on. I have never changed it from the default:
SELECT SESSIONPROPERTY('ANSI_PADDING')
The columns are defined like this:
Even if every record filled the maximum size of every column, by my rough calculation the table would only be around 4,125,470,000 bytes. I understand that the nvarchar columns use only the space actually required.
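(That comes from 868,520 rows × 4,750 bytes, my rough estimate of the maximum row size excluding the FILESTREAM data: 868,520 × 4,750 = 4,125,470,000 bytes, or roughly 3.8 GB.)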
I'm still missing a lot of space.

Not really an answer but more of a conclusion.
I have given up on this problem and resigned myself to removing data to stay under the 10 GB primary file size limit. I figured out that nvarchar columns store 2 bytes per character in order to handle Unicode, although they do only use the space required and don't pad the column out with spaces. So this would account for some of the space I couldn't find.
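A quick check along these lines (just a sketch) shows the two-bytes-per-character behaviour:
SELECT DATALENGTH(N'hello') AS nvarchar_bytes, -- 10: two bytes per character
       DATALENGTH('hello') AS varchar_bytes;   -- 5: one byte per character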
I tried to convert my char(500) columns to varchar(500) by adding new columns of the correct type, copying the data into them, and then dropping the old columns. This worked, but the table actually got bigger, because dropping a column is only a metadata change and does not actually remove the data. To recover the space I would need to create a new table, copy the data across, and then drop the old table; of course, I don't have enough space in the primary file to do that.
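For what it's worth, an in-place rebuild should also reclaim the space from the dropped fixed-length columns, but it needs roughly the table's size in free space while it runs, which is exactly what I don't have. A sketch, assuming the table is in the dbo schema:
ALTER TABLE dbo.PP4_MailBackup REBUILD;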
I thought about copying the table to tempdb, dropping the original table, and then copying it back, but tempdb doesn't support FILESTREAM columns (at least to my knowledge), so I would need to hold all 170 GB within the tempdb table. That sounded like a dubious solution, and my test server didn't have enough space on the partition where tempdb was stored. I couldn't find anything on the file size limit of tempdb in SQL Server 2008 Express, but at this point it was all getting too hard.

Related

Oracle - clean LOB files - recovering disk space

I have a friend who has a website and has asked me for help.
I often use MySQL databases but never Oracle.
Unfortunately his site uses an Oracle database, so I can't find a solution.
The available disk space is slowly decreasing. I have deleted a lot of rows from the table, but that doesn't solve his problem.
The database continues to slowly take up disk space.
I read that LOB segments do not release disk space, even if you delete data.
How can I easily reorganize the LOB segments with a simple query?
(or/and) How can I recover disk space in Oracle?
SELECT DISTINCT VERSION FROM PRODUCT_COMPONENT_VERSION
12.1.0.1.0
The deleted LOB data still occupies space in the table's LOB segment; it is only marked as unused. You can use the following command to free up that space:
ALTER TABLE <YOUR_TABLE_NAME> MODIFY
  LOB (<LOB_COLUMN_NAME>)
  (SHRINK SPACE);
The table should now have released some space, and it is available for reuse within the tablespace.
Further, you can resize the data file to release space back to the disk. (Note: space allocated to the data file will not be reduced automatically; it must be done manually.)
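For example (a sketch; the file path and target size are placeholders, check DBA_DATA_FILES for the real values):
ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/ORCL/users01.dbf' RESIZE 10G;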
Cheers!!

Migration issue while using DMS. Incorrect junk data for empty columns

While migrating from MySQL to Oracle using the AWS DMS service, some large columns (mediumtext) on the source side (the MySQL DB instance) have empty values for 75% of the rows in a table. In the target (Oracle), those rows were migrated with some other value (not junk values). To me it looks like the column values were copied between the wrong rows.
Wherever there are empty values in the source columns, some other data was copied in. Around 75% of the table data for some of the CLOB columns that are empty on the source side is incorrectly mapped to other data on the Oracle side. We used FULL LOB mode with a 10000 KB chunk size.
Some questions or requests:
1. Could you share the table DDL from source and target?
2. Are you sure there is no workload running on the target that could change values in the table outside the DMS process?
3. Full LOB mode migrates LOBs in chunks. Why are we specifying such a high LOB chunk size? Also, do we know the maximum LOB size, so that limited LOB mode could be used instead?
4. Could you paste the task ARN here? I work on AWS DMS and can look into what is going on. Once I find the root cause, I will also make sure I post the analysis here for all Stack Overflow users.
Let me know.

Copying rows from a remote Oracle database

I am using Oracle XE 10.2. I am trying to copy 2,653,347 rows from a remote database with the statement
INSERT INTO AUTOSCOPIA
(field1,field2...field47)
SELECT * FROM AUTOS@REMOTE;
I am trying to copy all 47 columns for all 2 million rows locally. After running for a few minutes, however, I get the error:
ORA-12952: The request exceeds the maximum allowed database size of 4 GB.
How can I avoid this error?
Details: I have 3 indexes on my local table (where I want to insert the remote data).
You're using the Express Edition of Oracle 10.2, which comes with a number of limitations. The one you're running into is that you are limited to 4 GB of space for your tables and indexes.
How big is the table in GB? With 2.6 million rows, if each row is more than ~1575 bytes then what you want to do isn't possible. You'd have to either limit the amount of data you're copying over (not every row, not every column, or not all of the data in some columns), or install a version and edition that allows you to store that much data. The Express Edition of 11.2 allows you to store 11 GB of data and is free just like the Express Edition of 10.2, so that would be the easiest option. You can see how much space the table consumes in the remote database by querying the ALL_SEGMENTS view there; that should approximate the amount of space you'd need in your local database.
Note that this ignores the space used by out-of-line LOB segments as well as indexes:
SELECT sum(bytes)/1024/1024/1024 size_in_gb
FROM all_segments@remote
WHERE owner = <<owner of table in remote database>>
AND segment_name = 'AUTOS'
If the table is less than 4 GB but the table plus its indexes is larger than 4 GB, then you could copy the data locally, but you would need to drop one or more of the indexes you've created on the local table before copying the data over. That may, of course, lead to performance issues, but you would at least be able to get the data into your local system.
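For example (a sketch; the index name and column are placeholders for whatever indexes you created on AUTOSCOPIA):
DROP INDEX autoscopia_idx1;
-- run the INSERT ... SELECT from the question here
CREATE INDEX autoscopia_idx1 ON autoscopia (field1);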
If you (or anyone else) has created any tables in this database, those tables count against your 4 GB database limit as well. Dropping them would free up some space that you could use for this table.
Assuming that you will not be modifying the data in this table once you copy it locally, you may want to use a PCTFREE of 0 when defining the table. That will minimize the amount of space reserved in each block for subsequent updates.
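For example (a sketch; the column names and types are placeholders):
CREATE TABLE autoscopia (
  field1  VARCHAR2(100),
  field2  VARCHAR2(100)
  -- ... remaining columns ...
) PCTFREE 0;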

How to tune Oracle's SQL*Loader append?

I am writing a Java program that creates a CSV file with 6,800,000 records conforming to specific distribution parameters and populates a table using Oracle's SQL*Loader.
I am testing my program with different record counts (50,000 and 500,000). Generating the CSV file itself is quite fast; using concurrency, it takes milliseconds to create the file and write these records into it.
Loading those records into the table, on the other hand, is taking too long. Reading the log file generated by SQL*Loader, it takes 00:00:32.90 to populate the table with 50,000 records and 00:07:58.83 to populate it with 500,000.
SQL*Loader benchmarks I've googled show much better performance, such as 2 million rows in less than 2 minutes. I've followed this tutorial to improve the time, but it barely changed at all. There's obviously something wrong here, but I don't know what.
Here's my control file:
OPTIONS (SILENT=ALL, DIRECT=TRUE, ERRORS=50, COLUMNARRAYROWS=50000, STREAMSIZE=500000)
UNRECOVERABLE LOAD DATA
APPEND
INTO TABLE MY_TABLE
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
...
Another important piece of info: I've tried using PARALLEL=TRUE, but I get the ORA-26002 error (table MY_TABLE has an index defined upon it). Unfortunately, running with SKIP_INDEX_MAINTENANCE leaves the index UNUSABLE.
What am I doing wrong?
Update
I have noticed that very soon after the program starts (less than a second in), all rows are already present in the database. Yet SQL*Loader is still busy and only finishes after 32-45 seconds.
What could it be doing?
One thought would be to create an external table that points at the CSV file. Then, after creating the file, you can run a SQL script inside Oracle to process the data directly.
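A sketch of that approach (the directory path, column names, and types are placeholders; MY_TABLE is the target table from the question):
CREATE DIRECTORY csv_dir AS '/path/to/csv';

CREATE TABLE my_table_ext (
  col1 NUMBER,
  col2 VARCHAR2(100)
  -- ... remaining columns ...
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('my_data.csv')
);

INSERT /*+ APPEND */ INTO my_table SELECT * FROM my_table_ext;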
Or, look at the following (copied from here):
This issue is caused when using the bulk load option in parallel to load an Oracle target that has an index on it. This is an Oracle limitation.
To resolve this issue, do one of the following:
- Change the target load option to Normal.
- Disable the enable parallel mode option in the relational connection browser.
- Drop the indexes before loading.
- Create pre- and post-session SQL to drop and re-create the indexes and key constraints (see the sketch below).
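A sketch of the last option (the index name and column are placeholders):
-- pre-session SQL
DROP INDEX my_table_idx;
-- ... run the SQL*Loader job ...
-- post-session SQL
CREATE INDEX my_table_idx ON my_table (some_column);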

SSIS, what is causing the slow performance?

For Source: OLE DB Source - SQL Command
SELECT -- The destination table Id has IDENTITY(1,1) so I didn't take it here
[GsmUserId]
,[GsmOperatorId]
,[SenderHeader]
,[SenderNo]
,[SendDate]
,[ErrorCodeId]
,[OriginalMessageId]
,[OutgoingSmsId]
,24 AS [MigrateTypeId] --This is a static value
FROM [MyDb].[migrate].[MySource] WITH (NOLOCK)
To Destination: OLE DB Destination
It takes 5 minutes or more to insert 1,000,000 rows. I even unchecked Check Constraints.
Then, with the same SSIS configuration, I wanted to test it with another table exactly like the destination table. So I re-created the destination table (with the same constraints, but without the existing data) and named it dbo.MyDestination.
But it takes about 30 seconds or less to load the same amount of the SAME data.
Why is it significantly faster with the test table than with the original table? Is it because the original table already has 107,000,000 rows?
Check for indexes, triggers, constraints, etc. on your destination table. These can slow things down considerably.
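Something like this lists them (a sketch; substitute your real destination table name for dbo.YourDestinationTable):
SELECT name, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.YourDestinationTable');

SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.YourDestinationTable');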
Check the OLE DB connection manager's Packet Size property and set it appropriately; you can follow this article to set it to the right value.
If you are familiar with SQL Server Profiler, use it to get more insight, especially into what happens when you insert into the re-created table versus the original table.
