Select folder with current date in jBASE

I'm stuck: I have created a folder with the current date in its name (e.g. test.20160419) and I want to select that folder and copy some records into it through a jBASE command.
Can anyone help me with this?

You need to use the COPY command (see the jBASE Knowledgebase) for what you are trying to achieve.
For your specific scenario, assuming that you want to copy all the records from the SRC.TABLE table to test.20160419, the command is:
COPY FROM SRC.TABLE TO test.20160419 ALL
If you only need to copy some of the records in SRC.TABLE, you can do a SELECT (see the jBASE Knowledgebase) first, like so:
>SELECT SRC.TABLE WITH @ID = "A]"

3 Records selected
>COPY FROM SRC.TABLE TO test.20160419 ALL
This will copy only the selected records (in this case, those whose keys start with A).
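If the dated name has to be built at run time, a minimal jBASE BASIC sketch could look like this (the test.YYYYMMDD pattern follows your example; TODAY and DATED.NAME are illustrative variable names):

* Build today's file name, create the file, then copy into it.
TODAY = DATE()
DATED.NAME = "test." : OCONV(TODAY, "DY4") : FMT(OCONV(TODAY, "DM"), "R%2") : FMT(OCONV(TODAY, "DD"), "R%2")
EXECUTE "CREATE-FILE " : DATED.NAME ;* reports an error if it already exists
EXECUTE "COPY FROM SRC.TABLE TO " : DATED.NAME : " ALL"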

Related

Hive: query files within a specific date range

Suppose on HDFS I have files with the following names: data1-2018-01-01.txt, data1-2018-01-02.txt, data1-2018-01-03.txt, data1-2018-01-04.txt, data1-2018-01-06.txt.
Now I want to query files based on date:
select * from mytable where date > '2018-01-03' and date < '2018-01-06';
And my question: is it possible to create an external table just on these files satisfying my query? Or maybe you have any workaround?
I know I could use partitions, but they require adding the data manually when a new data set arrives.
Put those files into a directory and create a new table on top of it.
Also, Hive has the INPUT__FILE__NAME virtual column; you can use it for filtering:
where INPUT__FILE__NAME like '%2018-01-03%'
It is also possible to use substr or regexp_extract to get the date from the file name, then use IN or >/< comparisons to filter, as in the sketch below.
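A minimal sketch, assuming every file name embeds its date as YYYY-MM-DD (the regular expression is the only assumption here):

select *
from mytable
where regexp_extract(INPUT__FILE__NAME, '(\\d{4}-\\d{2}-\\d{2})', 1) > '2018-01-03'
  and regexp_extract(INPUT__FILE__NAME, '(\\d{4}-\\d{2}-\\d{2})', 1) < '2018-01-06';

Since the extracted dates are zero-padded, plain string comparison orders them correctly.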

External tables: how to make sure I don't load the same file/data

I want to use an external table to load a CSV file as it's very convenient, but the problem is: how do I make sure I don't load the same file twice in a row? I can't validate the loaded data because it can be the same information as before; I need to find a way to make sure the user doesn't load the same file as, say, 2 hours ago.
I thought about uploading the file with a different name each time and issuing an ALTER TABLE command to change the name of the file in the definition of the external table, but it sounds kinda risky.
I also thought about marking each row in the file with a sequence to help differentiate files, but I doubt the client would accept it, as they would need to do this manually (the file is exported from somewhere).
Is there any better way to make sure I don't load the same file in the external table, other than changing the file's name and executing an ALTER on the table?
Thank you
When you bring the data from the external table into your database, you can use the MERGE command instead of INSERT; it saves you from worrying about duplicate data.
See the blog post about The Oracle Merge Command:
What's more, we can wrap up the whole transformation process into this one Oracle MERGE command, referencing the external table and the table function in the one command as the source for the MERGEd Oracle data.
alter session enable parallel dml;

merge /*+ parallel(contract_dim,10) append */
into contract_dim d
using TABLE(trx.go(
        CURSOR(select /*+ parallel(contracts_file,10) full(contracts_file) */ *
               from contracts_file))) f
on (d.contract_id = f.contract_id)
when matched then update
   set desc              = f.desc,
       init_val_loc_curr = f.init_val_loc_curr,
       init_val_adj_amt  = f.init_val_adj_amt
when not matched then insert
   values (f.contract_id,
           f.desc,
           f.init_val_loc_curr,
           f.init_val_adj_amt);
So there we have it: our complex ETL function, all contained within a single Oracle MERGE statement. No separate SQL*Loader phase, no staging tables, and all piped through and loaded in parallel.
I can only think of a solution somewhat like this:
Have a timestamp encoded in the data file name (like YYYYMMDDHHMISS-file.csv, where YYYYMMDDHHMISS is the timestamp).
Create a table that stores those timestamps.
Create a shell script that:
- extracts the timestamp from the data file name;
- calls a SQL script with the timestamp as a parameter (see the SQL sketch below), which returns 0 if that timestamp does not exist and non-zero if it already exists, in which case the script exits with the error: File: YYYYMMDDHHMISS-file.csv already loaded;
- copies the YYYYMMDDHHMISS-file.csv to input-file.csv;
- runs the SQL*Loader script that loads the input-file.csv file;
- on success, runs a second SQL script with the timestamp as a parameter that inserts a record in the database to mark the file as loaded, and moves the original file to a backup folder;
- on failure, reports the failure of the load script.
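A minimal sketch of the SQL pieces, assuming an illustrative bookkeeping table named loaded_files and SQL*Plus scripts that take the timestamp as positional parameter &1:

-- One row per successfully loaded file.
CREATE TABLE loaded_files (
  load_ts   VARCHAR2(14) PRIMARY KEY, -- YYYYMMDDHHMISS from the file name
  loaded_at DATE DEFAULT SYSDATE
);

-- check_loaded.sql: a count of 1 means this file was already loaded.
SELECT COUNT(*) FROM loaded_files WHERE load_ts = '&1';

-- mark_loaded.sql: run only after a successful load.
INSERT INTO loaded_files (load_ts) VALUES ('&1');
COMMIT;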

Oracle Apex - Read contents of an uploaded txt file

I am trying to read the contents of a text file that I uploaded using the File Browse page item, but I cannot figure out how to get the file's contents.
I don't need it pushed into a table or anything; I just want a string representation of it in a text area, for example, or stored in a variable so I can process it.
Apologies for any vagueness. I have tried a few ways, but I am not sure: can I get the contents somehow using WWV_FLOW_FILE?
The only solutions I have seen use the wizard region with the data mapping/verification breadcrumbs, which is not what I need.
You can find the uploaded file as a BLOB inside WWV_FLOW_FILES. If you just want to show it in, let's say, a text field named P1_some_text_field, you can simply add a procedure like this on the page where you do the upload:
BEGIN
  SELECT utl_raw.cast_to_varchar2(dbms_lob.substr(blob_content))
    INTO :P1_some_text_field
    FROM wwv_flow_files
   WHERE created_on = (SELECT MAX(created_on)
                         FROM wwv_flow_files
                        WHERE created_by = :APP_USER);
END;
Please note that this will retrieve at most the first 32,767 characters of your file.
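If the file can be longer than that, one option is to convert the whole BLOB to a CLOB instead of taking a single SUBSTR. A minimal sketch, with illustrative variable names:

DECLARE
  l_blob         BLOB;
  l_clob         CLOB;
  l_dest_offset  INTEGER := 1;
  l_src_offset   INTEGER := 1;
  l_lang_context INTEGER := dbms_lob.default_lang_ctx;
  l_warning      INTEGER;
BEGIN
  SELECT blob_content
    INTO l_blob
    FROM wwv_flow_files
   WHERE created_on = (SELECT MAX(created_on)
                         FROM wwv_flow_files
                        WHERE created_by = :APP_USER);

  dbms_lob.createtemporary(l_clob, TRUE);
  dbms_lob.converttoclob(l_clob, l_blob, dbms_lob.lobmaxsize,
                         l_dest_offset, l_src_offset,
                         dbms_lob.default_csid, l_lang_context, l_warning);
  -- l_clob now holds the full text for further processing.
END;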
When I upload the file to WWV_FLOW_FILES, I then use a PL/SQL statement to get the ID of the last file uploaded by the user, and pass that ID to another PL/SQL statement which launches a PowerShell script on the server to download the file. The file is then processed on the server.

Oracle: moving user messages into appropriate directories depending on time of arrival

My simplified table structure looks like this:
MESSAGES(Id NUMBER,Content NVARCHAR2, Received_time TIMESTAMP)
DIRECTORIES(Id NUMBER, Name NVARCHAR2, PARENT NUMBER)
MESSAGES_USERS(Id NUMBER, MSG NUMBER (FK MESSAGES(Id)), USER NUMBER, DIRECTORY NUMBER (FK DIRECTORIES(Id)))
So, the task is to move messages into appropriate directories depending on received time. This is achieved by updating the MESSAGES_USERS table and changing the DIRECTORY id.
Prepared directory structure looks like this:
2012--
|
--- 04/2012
|
--- 05/2012
|
--- 06/2012
So I'll have to move messages received in April 2012 into the directory named 04/2012, which is a descendant of the directory named 2012 in the current user's directory structure.
I search for the directory by name using
name LIKE TO_CHAR(M.Received_time, 'MM')||'/'||EXTRACT(YEAR FROM M.Received_time)
Is there any way to update the directory for every message in one UPDATE statement, without using a cursor? I've tried some correlated updates, but none of them (even with some huge subqueries) is a proper solution.
The statement should get the appropriate directory id using the message's received time (it requires joining these three tables) and update the DIRECTORY field in MESSAGES_USERS with that id.
I have no idea how to force Oracle to update every message with the appropriate directory without specifying the id of one message in a cursor loop. Is that even possible?
Does this do it?
update messages_users mu
   set directory = (select d.id
                      from directories d
                      join messages m
                        on d.name = to_char(m.received_time, 'MM/YYYY')
                     where m.id = mu.msg);

Save queries in VFP

How do I save queries to disk? I use the TO clause (example: SELECT * FROM vendors TO w.qpr). Everything works, but when I run the query with DO I receive the following error:
http://s52.radikal.ru/i138/1201/2f/15765ffe2346.png
And what should I change in order to get the query to behave like in the Query Designer? I mean that the results should appear in a Browse window, but using command mode.
Thank you in advance.
The TO clause is for storing the results of the query, not the query itself. (And, TO is a VFP extension; INTO is preferred.)
If you want to save the query, open up a PRG file (MODIFY COMMAND) and write the query there, then save it.
If you simply omit the TO or INTO clause, the query results will appear in a BROWSE window. Alternatively, use INTO CURSOR and give a cursor name, then issue BROWSE to browse the cursor.
Tamar
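Putting those two suggestions together, a minimal sketch (myquery.prg is an illustrative name, created with MODIFY COMMAND myquery):

* myquery.prg -- the saved query
SELECT * FROM vendors INTO CURSOR c_vendors
BROWSE && shows the result in a Browse window, as the Query Designer does

Then running DO myquery from the Command window browses the result.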
As in the other answer, use MODIFY COMMAND to make a .prg for your SELECT code.
The INTO clause is for the result.
SELECT * FROM zip INTO CURSOR c_zip
Or
SELECT * FROM zip INTO TABLE c:\temp\test
If you want an XLS or CSV or something, select into a cursor and then use:
EXPORT TO c:\temp\zip.xls XL5
To save a query file from the IDE: do File > New and select the Query radio button.
