Will not show time that has a leading zero hour - char

From an AS400 data query, I am trying to convert a column, currently stored as a 6-digit number, into a time shown as HH24:MI:SS for export to Excel. In Excel, only times that do not lead with a 0 appear (i.e., anything from 10:00:00 onward). Here is what I currently have written:
char(time(timestamp_format(char(PS.B1AOTS), 'HH24MISS')), jis) AS "PFS Batch Time"
In AS400, it shows times without a leading zero (9:00:00 as opposed to 09:00:00). I think this might be the cause.


Will Spring Batch prevent my program from grinding to a halt on 94 million transactions if the Garbage Collection is an issue?

This may look like a similar question to Performance optimization for processing of 115 million records for inserting into Oracle but I feel it's a different problem, and the other question does not have a definitive answer because of some lack of clarity.
I am loading a netCDF file, consisting of the following variables and dimensions, into three tables in a database in order to collect data from multiple data sources.
Variables:
Time: 365 entries in hours since Jan 1, 1900
Latitude: 360 entries, center of 1/2 degree latitude bands
Longitude: 720 entries, center of 1/2 degree longitude bands
Precipitation: 3-dimensional array with dimensions Time, Lat, Lon
The three tables I am constructing are like so:
UpdateLog:
uid year updateTime
Location:
lid lat lon
(hidden MtM table) UpdateLog_Location:
uid lid
Precipitation:
pid lid uid month day amount
If you do the math, the Location table (and the hidden table) will each have around 250k entries for this one file (it's just the year 2017), and the Precipitation table will have up to 94 million entries.
Right now, I am just using Spring Boot, trying to read in the data and update the tables starting with Location.
With a batch size of 1, the database started off updating fairly quickly, but bogged down over time. I didn't have any profiling set up at the time, so I wasn't sure why.
When I set it to 500, I could clearly see the steps as it slowed down with each update, but it started off much faster than a batch size of 1.
I set it to 250,000 and it updated the first 250,000 entries in about 3 minutes; on a batch size of 1, 72 hours wouldn't even have come close. However, I started profiling the program and noticed something. This seems to be a problem not with the database (35-40 seconds is all it took to commit all those entries) but with Java, as it seems the garbage collection isn't keeping up with all the old POJOs.
Now, I have been looking at 2 possible solutions to this problem: Spring Batch, and a direct CSV import into MariaDB. I'd prefer the former to keep things unified if possible. However, I've noticed that Spring Batch also has me create POJOs for each of the items.
Will Spring Batch remedy this problem for me? Can I fix it with a thread manager, multi-threading the operation so I can have multiple GCs running at once? Or should I just do the direct CSV file import into MariaDB?
The problem is that even if I can get this one file done in a few days, we are building a database of historical weather of all types. There will be many more files to import, and I want to set up a workable framework we can use for each of them. There's even 116 more years of data for this one data source!
Edit: Adding some metrics from the run last night that support my belief that the problem is the garbage collection.
194880 nanoseconds spent acquiring 1 JDBC connections;
0 nanoseconds spent releasing 0 JDBC connections;
1165541217 nanoseconds spent preparing 518405 JDBC statements;
60891115221 nanoseconds spent executing 518403 JDBC statements;
2167044053 nanoseconds spent executing 2 JDBC batches;
0 nanoseconds spent performing 0 L2C puts;
0 nanoseconds spent performing 0 L2C hits;
0 nanoseconds spent performing 0 L2C misses;
6042527312343 nanoseconds spent executing 259203 flushes (flushing a total of 2301027603 entities and 4602055206 collections);
5673283917906 nanoseconds spent executing 259202 partial-flushes (flushing a total of 2300518401 entities and 2300518401 collections)
As you can see, it is spending 2 orders of magnitude longer flushing memory than actually doing the work.
4 tables? I would make 1 table with 4 columns, even if the original data were not that way:
dt DATETIME -- y/m/d:h
lat SMALLINT
lng SMALLINT
amount ...
PRIMARY KEY (dt, lat, lng)
And, I would probably do all the work directly in SQL.
LOAD DATA INFILE into whatever matches the file(s).
Run some SQL statements to convert to the schema above.
Add any desired secondary indexes to the above table.
(In one application, I converted hours into a MEDIUMINT, which is only 3 bytes. I needed that type of column in far more than 94M rows across several tables.)
At best, your lid would be a 3-byte MEDIUMINT standing in for two 2-byte SMALLINTs. The added complexity probably outweighs a mere 94MB savings.
Total size: about 5GB. Not bad.
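The load-then-convert steps above might look like the following sketch. The staging table and column names (precip_staging, time_hours, lat_index, lng_index, weather) are assumptions for illustration, not from the original data:

```sql
-- Sketch only: names are hypothetical. Load the raw export into a
-- staging table that matches the file layout.
LOAD DATA INFILE 'precip_2017.csv'
INTO TABLE precip_staging
COLUMNS TERMINATED BY ','
(time_hours, lat_index, lng_index, amount);

-- Convert "hours since Jan 1, 1900" into the dt/lat/lng schema above.
INSERT INTO weather (dt, lat, lng, amount)
SELECT TIMESTAMP('1900-01-01 00:00:00') + INTERVAL time_hours HOUR,
       lat_index, lng_index, amount
FROM precip_staging;
```

The secondary indexes would then be added after this insert, so the bulk load never pays index-maintenance costs.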
I've noticed that Spring Batch also has me create POJOs for each of the items.
Spring Batch does not force you to parse data and map it to POJOs. You can use the PassThroughLineMapper and process items in their raw format (even in binary if you want).
I would recommend using partitioning in your use case.
I'd like to thank those who assisted me as I have found several answers to my question and I will outline them here.
The problem stemmed from the fact that Hibernate ends up creating 1,000 garbage collection jobs per POJO and is not a very good system for batch processing. Any good remedy for large batches will avoid using Hibernate altogether.
The first method I found uses Spring Boot without Hibernate. By creating my own bulk save method in my repository interface, I was able to bind it directly to a SQL insert query without needing a POJO or using Hibernate to create the query. Here is an example of how to do that:
@Modifying
@Query(value = "insert ignore into location (latitude, longitude) values (:latitude, :longitude)",
       nativeQuery = true)
public void bulkSave(@Param("latitude") float latitude, @Param("longitude") float longitude);
Doing this greatly reduced the garbage collection overhead, allowing the process to run without slowing down over time. However, while an order of magnitude faster, this was still far too slow for my purposes, taking 3 days for 94 million lines.
Another method shown to me was to use Spring Batch to send the queries in bulk instead of one at a time. Because my data source was unusual (it was not a flat file), I had to handle the data and feed it into an ItemReader one entry at a time, to make it appear to be coming directly from a file. This also improved speed, but I found a much faster method before I tried it.
The fastest method I found was to write the tables I wanted out to a CSV file, compress it, and transmit the resulting file to the database server, where it could be decompressed and imported directly. This can be done for the above table with the following SQL command:
LOAD DATA
INFILE 'location.csv' IGNORE
INTO TABLE Location
COLUMNS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(latitude, longitude)
SET id = NULL;
This process took 2-3 minutes to create the files, 5 minutes to compress the 2.2 GB of files, 5 minutes to decompress them, and 15 minutes to load them in; transmission of the file will depend on your network capabilities. At 30 minutes plus network transfer time, this was by far the fastest method of importing the large amounts of data I needed into the database, though it may require more work on your part depending on your situation.
So those are the 3 possible solutions to this problem that I discovered. The first uses the same framework and is easy to understand and implement. The second extends the framework and allows for larger transfers in the same period. The final one is by far the fastest and is useful if the amount of data is egregious, but requires work on your part to build the software to do it.

Oracle DB: Convert String(Time stamp) into number(minutes)

So, I am trying to build a query against the RMAN catalogue (using RC_RMAN_BACKUP_JOB_DETAILS) to compare the most recent backup duration (TIME_TAKEN_DISPLAY) for each database (DB_NAME) with its historical average backup duration.
How do I convert TIME_TAKEN_DISPLAY (a timestamp in HH:MM:SS form, stored as VARCHAR2) into minutes, i.e. a number only, so I can run the query against the entire RC_RMAN_BACKUP_JOB_DETAILS view and compare the average time taken in the past with the time taken by the last backup for each DB?
One thing that may work is converting String(TIME_TAKEN_DISPLAY) -> TO_TIME (time format) -> TO_NUM (minutes as a number), but this would be highly inefficient.
The solution can be pretty simple or complex depending on the requirements:
One simple solution is:
select avg(to_number(substr(TIME_TAKEN_DISPLAY, 1, 2)) * 60 + to_number(substr(TIME_TAKEN_DISPLAY, 4, 2)) + to_number(substr(TIME_TAKEN_DISPLAY, 7, 2)) / 60) from RC_RMAN_BACKUP_JOB_DETAILS;
Using Type Casting Functions:
Cast TIME_TAKEN_DISPLAY into a time format using TO_TIMESTAMP and then cast to a number with TO_NUMBER. I did not want to take this approach, as I plan to run my scripts against all databases logged in the view, and multiple casts would make performance highly inefficient.
But as per @Alex Poole's comment, I will be using the ELAPSED_SECONDS field, as it is readily available in seconds and has a NUMBER data type.
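For reference, a minimal sketch of the comparison baseline using that field, assuming ELAPSED_SECONDS is populated for every row:

```sql
-- Average backup duration in minutes per database, straight from
-- the numeric ELAPSED_SECONDS column (no string parsing needed).
SELECT db_name,
       ROUND(AVG(elapsed_seconds) / 60, 2) AS avg_minutes
FROM rc_rman_backup_job_details
GROUP BY db_name;
```

The most recent run can then be compared against this per-database average without any casting of TIME_TAKEN_DISPLAY.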

Comparing millisecond timestamps in HDFS

I have 2 timestamp columns stored in HDFS that I can access through Impala, Hive, etc.
The timestamps that I need to compare may look like this example:
2014-04-08 00:23:21.687000000
2014-04-08 00:23:21.620000000
With differences in the milliseconds, I need to build a new column that, in this example, should have a value of 0.067000.
I've tried using impala's built in time functions but none of them seem to make the cut.
I've tried:
Casting the string to a timestamp and then subtracting the 2 values. This returns the error "AnalysisException: Arithmetic operation requires numeric operands".
Using the unix_timestamp function. This truncates the values to an int representing seconds, so sub-second values are lost.
While writing this question I found the answer :)
The way to do it was using a double cast.
cast(cast(time_stamp as timestamp) as double)
This turns the time_stamp into a number (seconds since the epoch, with a fractional part) without truncating sub-second values.
Once there it becomes a trivial arithmetic operation.
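Put together, the new column might be computed like this. The table and column names (events, start_ts, end_ts) are assumptions for illustration:

```sql
-- Difference in seconds, keeping sub-second precision: cast each
-- string to a timestamp, then to a double, and subtract.
SELECT cast(cast(end_ts AS timestamp) AS double)
     - cast(cast(start_ts AS timestamp) AS double) AS diff_seconds
FROM events;
```

For the two example values above, this yields 0.067 seconds.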

SQLCE performance in windows phone very poor

I'm writing this thread as I've fought this problem for three whole days now!
Basically, I have a program that takes a big CSV file and uses it as input to a local SQL CE database.
For every row in this CSV file (which represents some sort of object, let's call it a "dog"), I need to know whether this dog already exists in the database.
If it already exists, don't add it to the database.
If it doesn't exist, add a new row to the database.
The problem is, every query takes around 60 milliseconds in the beginning (when the database is empty) and goes up to about 80 ms when the database is around 1000 rows big.
When I have to go through 1000 rows (which in my opinion is not much), this takes around 70,000 ms = 1 minute and 10 seconds just to check whether the database is up to date; way too slow! Considering this amount will probably some day be more than 10,000 rows, I cannot expect my user to wait over 10 minutes before his DB is synchronized.
I've tried to use the compiled query instead, but that does not improve performance.
The field I'm searching on is a string (which is the primary key), and it's indexed.
If it's necessary, I can update this thread with code so you can see what I do.
SQL CE on Windows Phone isn't the fastest of creatures, but you can optimise it.
This article covers a number of things you can do: WP7 Local DB Best Practices
They also provide a WP7 project that can be downloaded so you can play with the code.
On top of this article, I'd suggest changing your PK from a string to an int; strings take up more space than ints, so your index will be larger and take more time to load from isolated storage. Certainly in SQL Server, searches on strings are slower than searches on ints/longs.
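As a sketch of that change (the table and column names are assumptions), the string key can move behind an integer primary key while keeping a unique index for the existence check:

```sql
-- Hypothetical sketch: integer surrogate PK keeps the clustered index
-- small; the old string key keeps a unique constraint so the
-- "does this dog already exist?" lookup still works.
CREATE TABLE Dog (
  DogId  INT IDENTITY (1,1) PRIMARY KEY,
  DogKey NVARCHAR(100) NOT NULL,
  CONSTRAINT UQ_Dog_Key UNIQUE (DogKey)
);
```

Foreign keys elsewhere would then reference the 4-byte DogId instead of the string.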

WebUtil's CLIENT_TEXT_IO.PUT_LINE when writing CSV files

We're migrating our Oracle Forms and Oracle Reports from 6i to 10g on Windows 7. But when we switched to the new PCs with Windows 7, users reported that several reports and some forms that generate CSV files were producing incomplete data or blank files (no records, just headers).
Looking around, we found out that when we use a BETWEEN clause like this:
SELECT id, name, lastname FROM employee WHERE date_start BETWEEN :P_INIT_DATE AND :P_FINAL_DATE
The resulting file was blank or had records with mismatched dates, so we deduced there was a problem between Windows 7's date handling and the Oracle database, or whatever; we don't know yet. We could work around all this with a double conversion, TO_DATE(TO_CHAR(:P_DATE)). But now, when we want to generate a CSV file in Forms 10g using CLIENT_TEXT_IO.PUT_LINE, we're experiencing strange behavior: WebUtil starts writing the file, but when it reaches a certain number of lines it overwrites the same file, starting again at the beginning of the CSV. So when you open the file in Excel you only see the last X lines.
I would really appreciate any help fixing these problems. There is no specific question; I'm just explaining the problem we have, looking for help.
CLIENT_TEXT_IO caches records before writing them to your file. I've seen several different thresholds in the range you cite. If your form code issues a SYNCHRONIZE; every so many records written, the cache will be flushed on each SYNCHRONIZE. I'm not writing large files at the moment, but in the past 100 records per SYNCHRONIZE has worked well. Check your timings carefully; 100 may be too few records per SYNCHRONIZE. Since the number I've seen varies from shop to shop, I'd wager it's NOT related solely to the number of records, but to how many bytes you stuff into the cache.
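A sketch of that pattern in Forms PL/SQL (the file handle, loop bound, and csv_line helper are assumptions for illustration):

```sql
-- Flush the WebUtil cache every 100 lines; tune the interval to
-- your record size, since the threshold seems byte-based.
FOR i IN 1 .. total_recs LOOP
  CLIENT_TEXT_IO.PUT_LINE(out_file, csv_line(i));  -- csv_line is hypothetical
  IF MOD(i, 100) = 0 THEN
    SYNCHRONIZE;
  END IF;
END LOOP;
CLIENT_TEXT_IO.FCLOSE(out_file);
```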
