I started a new database named GREEN.db with one table defined as follows:
CREATE TABLE articles(
"articleID" serial NOT NULL,
"articleTitle" character varying(21) NOT NULL,
"articleContent" text NOT NULL,
"articleAuthor" character varying(7) NOT NULL ,
"articleTime" timestamp without time zone DEFAULT now(),
CONSTRAINT articles_pkey PRIMARY KEY ("articleID")
)
And my code was written as follows:
db = web.database(dbn='postgres', db='green',user='YOng',password='xxx')
......
i = web.input()
t = time.localtime(time.time())
st = time.strftime("%Y-%m-%d %H:%M:%S", t)
datas = list(db.query("""SELECT * FROM articles ORDER BY "articleID" DESC"""))
n = db.insert("articles",
    articleID=len(datas)+1,
    articleTitle=i.post_title,
    articleContent=i.post_content,
    articleAuthor="YOng",
    articleTime=st)
web.seeother('/')
The error thrown says:
psycopg2.ProgrammingError: column "articleid" of relation "articles" does not exist
LINE 1: INSERT INTO articles (articleTitle, articleAuthor, articleID...
                                                           ^
I don't know what happened to this code. Does anyone have any suggestions? Any help is appreciated~
Perhaps because of the uppercase letters?
The error is:
column "articleid" of relation "articles" does not exist
but your column name is "articleID". Because the table was created with quoted identifiers, the column names are case-sensitive; an unquoted articleID in the INSERT is folded to lowercase articleid, which does not exist, so the column names must be double-quoted in the query.
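A minimal sketch of the case-folding at the SQL level, assuming the table definition above (the inserted values are just placeholders):

-- Unquoted identifiers are folded to lowercase by PostgreSQL, so this
-- looks for articletitle/articlecontent/articleauthor and fails just
-- like the db.insert call above:
INSERT INTO articles (articleTitle, articleContent, articleAuthor)
VALUES ('Hello', 'First post', 'YOng');

-- Quoting the identifiers preserves their case and matches the CREATE TABLE;
-- "articleID" and "articleTime" fall back to their defaults:
INSERT INTO articles ("articleTitle", "articleContent", "articleAuthor")
VALUES ('Hello', 'First post', 'YOng');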
This program compiles correctly (we are on V7R3), but when running it receives an SQLCODE of -101 and an SQLSTATE of 54011, which states: "Too many columns were specified for a table, view, or table function." The JSON being created is very small, so I do not think that is the issue.
The RPGLE code:
dcl-s OutFile sqltype(dbclob_file);
xfil_tofile = '/ServiceID-REFCODJ.json';
Clear OutFile;
OutFile_Name = %TrimR(XFil_ToFile);
OutFile_NL = %Len(%TrimR(OutFile_Name));
OutFile_FO = IFSFileCreate;
OutFile_FO = IFSFileOverWrite;
exec sql
With elm (erpRef) as (select json_object
('ServiceID' VALUE trim(s.ServiceID),
'ERPReferenceID' VALUE trim(i.RefCod) )
FROM PADIMH I
INNER JOIN PADGUIDS G ON G.REFCOD = I.REFCOD
INNER JOIN PADSERV S ON S.GUID = G.GUID
WHERE G.XMLTYPE = 'Service')
, arr (arrDta) as (values json_array (
select erpRef from elm format json))
, erpReferences (refs) as ( select json_object ('erpReferences' :
arrDta Format json) from arr)
, headerData (hdrData) as (select json_object(
'InstanceName' : trim(Cntry) )
from padxmlhdr
where cntry = 'US')
VALUES (
select json_object('header' : hdrData format json,
'erpReferenceData' value refs format json)
from headerData, erpReferences )
INTO :OutFile;
Any help with this would be very much appreciated; this is our first attempt at creating JSON for sending, and we have not experienced this issue before.
Thanks,
John
I am sorry for the delay in getting back to this issue. It has been corrected; the issue was with the VALUES statement.
This is the corrected code that makes it work:
Select json_object('header' : hdrData format json,
       'erpReferenceData' value refs format json)
INTO :OutFile
From headerData, erpReferences;
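For completeness, a sketch of how that corrected SELECT slots back into the original embedded statement, reusing the same common table expressions from the question (only the final VALUES block changes):

exec sql
  With elm (erpRef) as (select json_object
      ('ServiceID' VALUE trim(s.ServiceID),
       'ERPReferenceID' VALUE trim(i.RefCod) )
    FROM PADIMH I
    INNER JOIN PADGUIDS G ON G.REFCOD = I.REFCOD
    INNER JOIN PADSERV S ON S.GUID = G.GUID
    WHERE G.XMLTYPE = 'Service')
  , arr (arrDta) as (values json_array (
      select erpRef from elm format json))
  , erpReferences (refs) as (select json_object ('erpReferences' :
      arrDta Format json) from arr)
  , headerData (hdrData) as (select json_object(
      'InstanceName' : trim(Cntry) )
    from padxmlhdr
    where cntry = 'US')
  -- the VALUES ( select ... ) INTO block is replaced by a plain SELECT ... INTO
  Select json_object('header' : hdrData format json,
                     'erpReferenceData' value refs format json)
  INTO :OutFile
  From headerData, erpReferences;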
I have a Hive table which contains 3 columns: "id" (string), "booklist" (array of string), and "date" (string), with the following data:
----------------------------------------------------
id | booklist | date
----------------------------------------------------
1 | ["Book1" , "Book2"] | 2017-11-27T01:00:00.000Z
2 | ["Book3" , "Book4"] | 2017-11-27T01:00:00.000Z
When trying to insert into Elasticsearch with this Pig script:
-------------------------Script begins------------------------------------------------
SET hive.metastore.uris 'thrift://node:9000';
REGISTER hdfs://node:9001/library/elasticsearch-hadoop-5.0.0.jar;
DEFINE HCatLoader org.apache.hive.hcatalog.pig.HCatLoader();
DEFINE EsStore org.elasticsearch.hadoop.pig.EsStorage(
'es.nodes = elasticsearch.service.consul',
'es.port = 9200',
'es.write.operation = upsert',
'es.mapping.id = id',
'es.mapping.pig.tuple.use.field.names=true'
);
hivetable = LOAD 'default.reading' USING HCatLoader();
hivetable_flat = FOREACH hivetable
GENERATE
id AS id,
booklist as bookList,
date AS date;
STORE hivetable_flat INTO 'readings/reading' USING EsStore();
-------------------------Script Ends------------------------------------------------
When running the above, I got an error saying:
ERROR 2999:Unexpected internal error. Found unrecoverable error [ip:port] returned Bad Request(400) - failed to parse [bookList]; Bailing out..
Can anyone shed any light on how to parse an ARRAY of STRING into ES and get the above to work?
Thank you!
I am facing the "Field in data file exceeds maximum length" error in SQL*Loader while loading data into Oracle.
//Below control file is used for sqlldr
// Control File: Product_Routing_26664.ctl
//Data File: phxcase1.pr
//Bad File: Product_Routing_26664.bad
LOAD DATA
APPEND INTO TABLE PEGASUSDB_SCHEMA.PRODUCT_ROUTING
FIELDS TERMINATED BY "^"
TRAILING NULLCOLS
(
OID_INST
,SEQ
,ROUTING_TYPE CHAR "(CASE WHEN trim(:ROUTING_TYPE) IS NULL THEN ' ' ELSE trim(:ROUTING_TYPE) END)"
,ODPD_KEY
,PROD_OFFSET
,EFF_DAYS_Z CHAR "(CASE WHEN trim(:EFF_DAYS_Z) IS NULL THEN ' ' ELSE trim(:EFF_DAYS_Z) END)"
,NETWORK_RTG_ID "substr(trim(:NETWORK_RTG_ID), 3, 26)"
,WT0
,WT1
,WT2
,WT3
,WT4
,WT5
,WT6
,WT7
,WT8
,WT9
,WT10
,WT11
,WT12
,WT13
,WT14
,WT15
,WT16
,WT17
,WT18
,WT19
,WT20
,WT21
,WT22
,WT23
,WT24
,WT25
,WT26
,WT27
,WT28
,WT29
,WT30
,WT31
,WT32
,WT33
,WT34
,WT35
,PCS0
,PCS1
,PCS2
,PCS3
,PCS4
,PCS5
,PCS6
,PCS7
,PCS8
,PCS9
,PCS10
,PCS11
,PCS12
,PCS13
,PCS14
,PCS15
,PCS16
,PCS17
,PCS18
,PCS19
,PCS20
,PCS21
,PCS22
,PCS23
,PCS24
,PCS25
,PCS26
,PCS27
,PCS28
,PCS29
,PCS30
,PCS31
,PCS32
,PCS33
,PCS34
,PCS35
,PR_TYPE CHAR "(CASE WHEN trim(:PR_TYPE) IS NULL THEN ' ' ELSE trim(:PR_TYPE) END)"
,PRODUCT_ROUTING_OID "PRODUCT_ROUTING_SQ.nextval"
,COMMON_CASE_OID CONSTANT "1"
,NETWORK_RTG_OID "(select NETWORK_RTG_OID from NETWORK_RTG where NETWORK_RTG_ID = substr(TRIM(:NETWORK_RTG_ID), 3, 26) and COMMON_CASE_OID = 1)"
)
Error: Record 2: Rejected - Error on table PEGASUSDB_SCHEMA.PRODUCT_ROUTING, column OID_INST.
Field in data file exceeds maximum length
I have tried changing the OID_INST column to OID_INST CHAR(4000), but it shows the same error.
Please help me in resolving this.
I am working on Task 2 in this link:
https://sites.google.com/site/hadoopbigdataoverview/certification-practice-exam
I used the code below:
a = load '/user/horton/flightdelays/flight_delays1.csv' using PigStorage(',');
dump a;
a_top = limit a 5;
a_top shows the first 5 rows. There are non-null values for Year in each row.
Then I type
a_clean = filter a BY NOT ($4=='NA');
aa = foreach a_clean generate a_clean.Year;
But that gives the error
ERROR 1200: null
What is wrong with this?
EDIT: I also tried
a = load '/user/horton/flightdelays/flight_delays1.csv' using PigStorage(',') AS (Year:chararray,Month:chararray,DayofMonth:chararray,DayOfWeek:chararray,DepTime:chararray,CRSDepTime:chararray,ArrTime:chararray,CRSArrTime:chararray,UniqueCarrier:chararray,FlightNum:chararray,TailNum:chararray,ActualElapsedTime:chararray,CRSElapsedTime:chararray,AirTime:chararray,ArrDelay:chararray,DepDelay:chararray,Origin:chararray,Dest:chararray,Distance:chararray,TaxiIn:chararray,TaxiOut:chararray,Cancelled:chararray,CancellationCode:chararray,Diverted:chararray,CarrierDelay:chararray,WeatherDelay:chararray,NASDelay:chararray,SecurityDelay:chararray,LateAircraftDelay:chararray);
and
aa = foreach a_clean generate a_clean.Year
but the error was
ERROR org.apache.pig.tools.pigstats.PigStats - ERROR 0: org.apache.pig.backend.executionengine.ExecException: ERROR 0: Scalar has more than one row in the output. 1st : (Year,Month,DayofMonth,DayOfWeek,DepTime,CRSDepTime,ArrTime,CRSArrTime,UniqueCarrier,FlightNum,TailNum,ActualElapsedTime,CRSElapsedTime,AirTime,ArrDelay,DepDelay,Origin,Dest,Distance,TaxiIn,TaxiOut,Cancelled,CancellationCode,Diverted,CarrierDelay,WeatherDelay,NASDelay,SecurityDelay,LateAircraftDelay), 2nd :(2008,1,3,4,2003,1955,2211,2225,WN,335,N712SW,128,150,116,-14,8,IAD,TPA,810,4,8,0,,0,NA,NA,NA,NA,NA)
Since you have not specified the schema in the LOAD statement, you will have to refer to the columns by the order in which they occur. Year seems to be the first column, so try this:
a_clean = filter a BY ($4 != 'NA');
aa = foreach a_clean generate $0 AS Year;
Inside the FOREACH, refer to the field directly (positionally here, or by name once the LOAD has an AS schema); a reference like a_clean.Year is treated as a scalar projection of the whole relation, which is what produced the "Scalar has more than one row" error.
I have the following code:
SELECT SUM(nvl(book_value,
0))
INTO v_balance
FROM account_details
WHERE currency = 'UGX';
--Write the balance away
SELECT SUM(nvl(book_value,
0))
INTO v_balance
FROM account_details
WHERE currency = 'USD';
--Write the balance away
Now the problem is that there might not be data in the table for that specific currency, but there might be data for the 'USD' currency. So basically I want to select the sum into my variable, and if there is no data I want my stored proc to continue and not throw the 01403 exception.
I don't want to put every SELECT INTO statement in a BEGIN ... EXCEPTION ... END block either, so is there some way I can suppress the exception and just leave the v_balance variable in an undefined (NULL) state without the need for exception blocks?
select nvl(balance,0)
into v_balance
from
(
select sum(nvl(book_value,0)) as balance
from account_details
where currency = 'UGX'
);
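A minimal sketch of that pattern applied to both balances from the question, with no exception blocks (assuming v_balance is a NUMBER variable in the enclosing procedure):

-- UGX: the outer query always returns exactly one row, so no ORA-01403 is raised;
-- NVL turns a NULL sum (no matching rows) into 0
select nvl(balance, 0)
  into v_balance
  from (select sum(nvl(book_value, 0)) as balance
          from account_details
         where currency = 'UGX');
-- Write the balance away

-- USD: same pattern
select nvl(balance, 0)
  into v_balance
  from (select sum(nvl(book_value, 0)) as balance
          from account_details
         where currency = 'USD');
-- Write the balance away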
SELECT L1.PKCODE L1CD, L1.NAME L1N, L1.LVL L1LVL,
L2.PKCODE L2CD, L2.NAME L2N, L2.LVL L2LVL,
L5.PKCODE L5CD, L5.NAME L5N,
INFOTBLM.OPBAL ( L5.PKCODE, :PSTDT, :PSTUC, :PENUC, :PSTVT, :PENVT ) OPBAL,
INFOTBLM.DEBIT ( L5.PKCODE, :PSTDT,:PENDT, :PSTUC, :PENUC, :PSTVT, :PENVT ) AMNTDR,
INFOTBLM.CREDIT ( L5.PKCODE, :PSTDT,:PENDT, :PSTUC, :PENUC, :PSTVT, :PENVT ) AMNTCR
FROM FSLVL L1, FSLVL L2, FSMAST L5
WHERE L2.FKCODE = L1.PKCODE
AND L5.FKCODE = L2.PKCODE
AND L5.PKCODE Between :PSTCD AND NVL(:PENCD,:PSTCD)
GROUP BY L1.PKCODE , L1.NAME , L1.LVL ,
L2.PKCODE , L2.NAME , L2.LVL ,
L5.PKCODE , L5.NAME
ORDER BY L1.PKCODE, L2.PKCODE, L5.PKCODE