How to print output in a single line in Oracle PL/SQL

I have been using dbms_output.put_line to write the output line by line; it comes to around 500,000 lines of more than 500 characters each.
I want all of the output on a single line, so I tried dbms_output.put, but that does not work because dbms_output has a limit of around 32,000 bytes. Please suggest a solution.

DBMS_OUTPUT is a mechanism for displaying messages to standard output (i.e. a screen). PL/SQL programs tend to be batch-oriented and frequently run as background jobs, so DBMS_OUTPUT has narrow applications. Writing out huge amounts of data is not one of them.
You haven't explained your use case, but the need to render millions of characters in a single stream of output, without carriage returns, suggests you want to write to a file. Oracle provides a built-in for this, UTL_FILE. We can iterate calls to UTL_FILE.PUT() to write as much data as we like to a single line in a file. Find out more in the UTL_FILE documentation.
UTL_FILE has one major constraint, which is it can only write to a file on the database server. Perhaps this is why you are trying to use DBMS_OUTPUT?
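As a sketch of that approach (the directory object DATA_DIR, the table, and the column names below are hypothetical): note that, as the UTL_FILE.PUT buffering question further down this page found, PUT plus FFLUSH hits a write error once you have written more than the buffer size without a newline, so this sketch opens the file in byte mode and writes with PUT_RAW instead.

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- 'wb' (write byte) mode avoids the newline requirement on flushes
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'single_line.dat', 'wb', 32767);
  FOR r IN (SELECT payload FROM big_table ORDER BY id) LOOP
    -- PUT_RAW appends no line terminator; TRUE flushes after each write
    UTL_FILE.PUT_RAW(l_file, UTL_RAW.CAST_TO_RAW(r.payload), TRUE);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/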
Something else? Oracle 12c supports streaming in PL/SQL via DBMS_XMLDOM. Check it out.
This is a pretty vague answer, because your question is pretty vague. If you edit your question and provide actual details regarding your issue, I can make my response more concrete.

Related

dba_scheduler_job_run_details and dbms_output

dba_scheduler_job_run_details is capable of keeping the dbms_output.put_line lines in its output column, and it does so when the job runs and exits normally. However, when I explicitly call dbms_scheduler.stop_job, all the output is lost.
How can I avoid this behaviour?
I don't think you can avoid this, in the same way that if you killed off a SQL*Plus session that was running a procedure, you would not see the output either. DBMS_OUTPUT simply puts data into an array, and the calling environment is then responsible for retrieving that information and putting it somewhere (e.g. writing it to the terminal). If you kill or stop the calling environment, there is nothing left to go and get that output.
A better option would be not to rely on DBMS_OUTPUT as your sole debugging/instrumentation mechanism. Much better options exist for capturing debugging information, including viewing it from other sessions in real time.
Check out Logger at https://github.com/OraOpenSource/Logger. It is in common use by many in the Oracle community.
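If pulling in a full framework is not an option, the underlying idea can be sketched in a few lines: an autonomous-transaction procedure that inserts into your own log table, so the rows are committed and visible from other sessions even if the job is later stopped. The table and procedure names below are hypothetical and this is not Logger's actual API.

CREATE TABLE app_log (
  log_time  TIMESTAMP DEFAULT SYSTIMESTAMP,
  message   VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_msg (p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the calling job
BEGIN
  INSERT INTO app_log (message) VALUES (p_message);
  COMMIT;  -- the row stays visible even if the job is stopped or rolls back
END;
/

Calling log_msg('checkpoint reached') from inside the scheduled job then leaves a durable trail you can query from any other session while the job is still running.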

Converting text file with spaces between CR & LF

I've never seen this line ending before and I am trying to load the file into a database.
The lines all have a fixed width. After the CSV text that contains the data (whose length varies line by line), there is a CR followed by multiple spaces, ending with an LF. The spaces pad each line out to the same width.
Line1,Data 1,Data 2,Data 3,4,50D20202020200A
Line2,Data 11,Data 21,Data 31,41,510D2020200A
Line3,Data12,Data22,Data 32,42,520D202020200A
I am about to handle this with a stream reader / writer in C#, but there are 40 files that come in each month and if there is a way to convert them all at once instead of one line at a time, I would rather do that.
Any thoughts?
Line-by-line processing of a stream doesn't have to be a bottleneck if you implement it at the right point in your overall process.
When I've had to do this kind of preprocessing I put a folder watch on the inbound folder, then automatically pick up each file and process it upon arrival, putting the original into an archive folder and writing the processed file into another location from which data will be parsed or loaded into the database. Unless you have unusual real-time requirements, you'll never notice this kind of overhead. If you do have real-time requirements, this issue will pale in comparison to all the other issues you'll face with batched data files :)
But you may not even have to go through a preprocessing step at all. You didn't indicate what database you will be using or how you plan to load the data, but many databases do include utilities to process fixed-length records. In the past, fixed-format files came with every imaginable kind of bizarre format (and contained all kinds of stuff that had to be stripped out or converted). As a result those utilities tend to be very efficient at this kind of task. In my experience they can easily be at least an order of magnitude faster than line-by-line processing, which can make a real difference on larger bulk loads.
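If the target happens to be Oracle, as elsewhere on this page, one hedged sketch of that idea is an external table that absorbs the padding without any preprocessing step; the directory object, column list, table names, and file name below are hypothetical.

CREATE TABLE staging_ext (
  col1 VARCHAR2(50),
  col2 VARCHAR2(50),
  col3 VARCHAR2(50),
  col4 VARCHAR2(50),
  col5 VARCHAR2(50),
  col6 VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('input.csv')
)
REJECT LIMIT UNLIMITED;

-- The stray CR and the space padding land on the last field; strip them as the rows are read
INSERT INTO target_table
SELECT col1, col2, col3, col4, col5,
       RTRIM(RTRIM(col6), CHR(13))
FROM   staging_ext;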
If your database doesn't have good bulk import processing tools, there are a number of open-source or freeware utilities already written that do pretty much exactly what you need; you can find them on GitHub and elsewhere. NPM's replace and zzzprojects' findandreplace are two examples.
For a quick and dirty approach that allows you to preview all the changes as you develop a more robust solution, many text editors can find and replace across multiple files. I've used that approach successfully in the past. For example, NotePad++'s Find in Files dialog lets you use a regex to remove or change whatever you like in all files matching defined criteria.

Fastest way to read and write data to file(s)?

In a VBScript application, I need to log a few (50ish) parameters over time. Since using a database for this would be overkill, I'll do this with flat files.
One thread writes data into the files every second.
The user can draw a plot of any variable.
I wonder what would be the most efficient way to do things:
A single file:
single.txt
|Time|Param1|Param2|...|Param50|
|1|0.5|1.8|...|0.24|
One file per parameter:
param1.txt
|Time|Param|
|1|0.5|
param2.txt
|Time|Param|
|1|1.8|
For me, a single file would be easier to write, but more difficult to read and vice versa.
The files are meant to be no longer than 100k lines.
Is there a solution that is always better, 'theoretically', or is there a break-even point depending on the number of parameters?
Thanks a lot for your help,
Maxime
@AnsgarWiechers' answer was definitely the right one.
Writing to a .csv file and querying it with ADO works perfectly.

Does SQL*Loader have any functionality that allows for customizing the log file?

I have been asked to create a system for allowing third party companies to dump data into several of our tables. These third parties provide csv files on a periodic basis, and after doing some research it seemed like Oracle themselves had a standard tool for doing so, "sqlldr". I've since gotten it working to an acceptable degree, and we have a job scheduled to run that script once a day.
But one of the third parties supplies really dirty data, of the sort where I can't expect it to always load every row/record (looking like up to about 8% will fail). My boss asked me to forward "all output" from the first few tests to him, and like a moron I also sent the log file.
He has asked that this "report" be modified to include those exceptions that aren't unique constraints along with the line in the input file that caused the exception.
This means that I need data from the log file, but also from the (I believe) reject file in a single document. Rather than write a convoluted shell script to combine those two, does SQL*Loader itself allow any customization that might achieve the same thing? I've read through the Oracle documentation and haven't found anything that suggests this, but I've also learned not to trust it entirely either.
Is this possible? Ideally, the solution would allow me to add values to the reject file that don't exist in the original input file, but I'm also interested in any customization of the log file or reject file.
No.
I was going to stop there, but you can define the name of the log file, which might help with the issue. Most automation with SQL*Loader involves wrapping it in shell scripts; in other words, "roll your own."
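For what it's worth, the log and bad file names are just command-line parameters, so a thin wrapper only has to pick those files up afterwards; the file names below are hypothetical.

sqlldr userid=loader_user control=load.ctl data=input.csv log=load_run.log bad=load_run.bad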

Oracle PL/SQL UTL_FILE.PUT buffering

I'm writing a large file > 7MB from an Oracle stored procedure and the requirements are to have no line termination characters (no carriage return/line feed) at the end of each record.
I've written a stored procedure using UTL_FILE.PUT, and I'm following each call to UTL_FILE.PUT with a UTL_FILE.FFLUSH. The procedure errors with a write error once I've written more than the buffer size (set to the maximum of 32767), even though I'm making the FFLUSH calls. The procedure works fine if I replace the PUT calls with PUT_LINE calls.
Is it not possible to write more than the buffer size without a newline character? If so, is there a work around?
Dustin,
The Oracle documentation here:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#i1003404
states that:
FFLUSH physically writes pending data to the file identified by the file handle. Normally, data being written to a file is buffered. The FFLUSH procedure forces the buffered data to be written to the file. The data must be terminated with a newline character.
The last sentence is the most pertinent one.
Could you not write the data using UTL_FILE.PUT_LINE and then search the resulting file for the line terminators and remove them?
Just a thought....
Another possible way to do this is a Java stored procedure, where you can use the more full-featured Java API for creating and writing to files.
Although it is less than desirable, you could always PUT until you have detected that you are nearing the buffer size. When this occurs, you can FCLOSE the file handle (flushing the buffer) and re-open that same file with FOPEN using 'a' (append) as the mode. Again, this technique should generally be avoided, especially if other processes are also trying to access the file (for example: closing a file usually revokes any locks the process had placed upon it, freeing up any other processes that were trying to acquire a lock).
Thanks for all the great responses; they have been very helpful. The Java stored procedure looked like the way to go, but since we don't have a lot of Java expertise in-house, it would be frowned upon by management. However, I was able to find a way to do this from the stored procedure. I had to open the file in write byte mode 'WB'. Then, for each record I'm writing to the file, I convert it to the RAW datatype with UTL_RAW.CAST_TO_RAW and use UTL_FILE.PUT_RAW to write it to the file, followed by any necessary FFLUSH calls to flush the buffers. The receiving system has been able to read the files; so far so good.
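For reference, a minimal sketch of that approach, written under the assumption that the source data sits in a CLOB and therefore has to be sliced into chunks that fit the 32,767-byte limits; the directory object, table, and chunk size below are hypothetical.

DECLARE
  l_file   UTL_FILE.FILE_TYPE;
  l_clob   CLOB;
  l_pos    PLS_INTEGER := 1;
  l_len    PLS_INTEGER;
  l_chunk  VARCHAR2(32767);
  -- conservative chunk size so the RAW stays under 32767 bytes even with multi-byte data
  c_amount CONSTANT PLS_INTEGER := 8000;
BEGIN
  SELECT payload INTO l_clob FROM big_table WHERE id = 1;  -- hypothetical source
  l_len  := DBMS_LOB.GETLENGTH(l_clob);
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'no_newlines.dat', 'wb', 32767);
  WHILE l_pos <= l_len LOOP
    l_chunk := DBMS_LOB.SUBSTR(l_clob, c_amount, l_pos);
    UTL_FILE.PUT_RAW(l_file, UTL_RAW.CAST_TO_RAW(l_chunk), TRUE);  -- no line terminator, flushed each pass
    l_pos := l_pos + LENGTH(l_chunk);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/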
