How to insert a record into KDB (KX database) - insert

I have created a table named test in KDB using the following statement
test:([doc_id:`int$()];doc_displayid:`symbol$();doc_created_date:`date$();doc_aacess_date:`timestamp$();is_native_exist:`boolean$();file_size:`real$())
Now I want to insert a record into it.
I've tried many ways, like:
insert['test; (1;`D_30;.z.d;.z.P;T;8.5)]
insert['test ([];`D_30;2018.8.8;2018.8.9T12:00:00.123;T;8.5)]
insert['test (1;`D_30;.z.d;2018.8.9T12:00:00.123;T;8.5)]
insert['test (1;`D_30;.z.d;2018.8.9T12:00:00.123;T;8.5)]
'test insert (1;`D_30;2018.8.8;2018.7.8T12:00:00.123;T;8.5)
But none of them work.
Please help me solve this problem.
Thanks in advance.

Check the types of your input values before inserting into your test table.
Basically:
your 1 is of type long, not int;
2018.07.08T12:00:00.123 is of type datetime, not timestamp;
T does not exist; for a boolean you should write 1b for true;
8.5 is of type float, not real.
When the above are converted to the appropriate types, the insert works, provided that you use a backtick `test, not 'test:
`test insert (1i;`D_30;2018.08.08;"p"$2018.07.08T12:00:00.123;1b;8.5e)
doc_id| doc_displayid doc_created_date doc_aacess_date is_native_exist file_size
------| --------------------------------------------------------------------------------------
1 | D_30 2018.08.08 2018.07.08D12:00:00.123000000 1 8.5
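For example, you can compare the table's meta with the type of each value before inserting (a quick sketch; the codes in the comments are q's short type values):
meta test                        / column types are i, s, d, p, b, e
type 1                           / -7h  long, not int
type 1i                          / -6h  int
type 2018.08.09T12:00:00.123     / -15h datetime, not timestamp
type .z.P                        / -12h timestamp
type 8.5                         / -9h  float, not real
type 8.5e                        / -8h  real
type 1b                          / -1h  boolean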

There are multiple issues with your insert statements; please check out the kdb+ datatypes wiki for examples.
A symbol should be written with a ` (backtick), i.e. `test, not 'test.
doc_id is defined as int, so you need to pass an explicit i, e.g. 1i, 2i.
There is no T boolean literal; use 1b for true and 0b for false.
A real should be written with an explicit e at the end (8.5e).
A timestamp is written as dateDtimespan (D).
You can use either insert or upsert. upsert lets you overwrite a record whose key has already been inserted, whereas insert requires the key to be unique and throws an error otherwise (see the sketch after the output below).
upsert[`test; (1i;`D_30;.z.d;.z.P;0b;8.5e)]
insert[`test ;(2i;`D_30;2018.08.08;2018.08.09D12:00:00.123123123;0b;8.5e)]
doc_id| doc_displayid doc_created_date doc_aacess_date is_native_exist file_size
------| --------------------------------------------------------------------------------------
1 | D_30 2018.08.21 2018.08.21D07:33:40.630975000 0 8.5
2 | D_30 2018.08.08 2018.08.09D12:00:00.123123123 0 8.5
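As a quick sketch of that difference, using the test table above (the `D_31 row is just an example value):
insert[`test; (1i;`D_31;.z.d;.z.P;1b;9.5e)]    / signals an 'insert error - key 1 already exists
upsert[`test; (1i;`D_31;.z.d;.z.P;1b;9.5e)]    / replaces the existing row keyed by 1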

How to use an F expression to first cast a string to int, then add 1, then cast back to string and update

I have a DB column which is a generic type for some stats (qualitative and quantitative info).
Some values are strings - type A - and some values are numbers stored as strings - type B.
What I want to do is cast the type-B values to numbers, add one to them, then cast back to string and store.
Metadata.objects.filter(key='EVENT', type='COUNT').update(value=CAST(F(CAST('value', IntegerField()) + 1), CharField())
What I want to do is avoid race conditions by using F expressions and
updating in the DB.
https://docs.djangoproject.com/en/4.0/ref/models/expressions/#avoiding-race-conditions-using-f
The post below says that casting and updating in the DB is possible for MySQL:
Mysql Type Casting in Update Query
I also know we can do arithmetic easily on F expressions, as they support it, and we can override the add functionality as well: How to do arithmetic on Django 'F' types?
How can I achieve cast -> update -> cast -> store in a Django queryset?
Try using annotation as follows:
(Metadata.objects
    .filter(key='EVENT', type='COUNT')
    .annotate(int_value=Cast('value', IntegerField()))
    .update(value=Cast(F('int_value') + 1, CharField())))
Or maybe switching F and Cast works?
(Metadata.objects
    .filter(key='EVENT', type='COUNT')
    .update(value=Cast(          # cast the whole expression below
        Cast(                    # cast a value
            F('value'),          # of field "value"
            IntegerField()       # to integer
        ) + 1,                   # then add 1
        CharField()              # to char
    )))
I've added indentation; it sometimes helps to find the errors.
Also, the docs say Cast accepts a field name, not an F object. Maybe it works without the F object at all?
UPD: switched back to first example, it actually works :)
I believe the answer from #som-1 was informative but not substantiated with debugged data, and assuming is not always right.
I debugged the MySQL queries formed in these two cases:
1 - Metadata.objects.update(value=Cast(Cast(F('value'), output_field=IntegerField()) + 1, output_field=CharField()))
2 - Metadata.objects.update(value=Cast(Cast('value', IntegerField()) + 1, CharField())) and
both give the same output as expected.
UPDATE Metadata SET value = CAST((CAST(value AS signed integer) + 1) AS char) WHERE ( key = 'EVENT' AND type = 'COUNT' )
Please see this link for adding mysqld options to my.cnf so you can debug your queries: Location of my.cnf file on macOS
Enabling query logging: https://tableplus.com/blog/2018/10/how-to-show-queries-log-in-mysql.html
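If you just want to see the SQL Django generates, without touching my.cnf, a minimal sketch (assuming settings.DEBUG is True, and using the model and filter from the question) is:
from django.db import connection
from django.db.models import F, IntegerField, CharField
from django.db.models.functions import Cast

Metadata.objects.filter(key='EVENT', type='COUNT').update(
    value=Cast(Cast(F('value'), output_field=IntegerField()) + 1, output_field=CharField())
)
print(connection.queries[-1]['sql'])  # connection.queries is only populated when DEBUG is True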

Inferring type of values returned by the SQLite3 shell tool

I am, out of necessity, using the SQLite3 shell tool to maintain a small database. I'm using the -header -ascii flags, although this applies, as far as I can tell, to any of the output choices. I'm looking for a way to avoid ambiguity over the type of any one value returned. Consider the following:
Create Table `things` (`number` Integer, `string` Text, `binary` Blob);
Insert Into `things` (`number`,`string`,`binary`) Values (4,'4',X'34');
Select * From `things`;
This returns (using caret notation):
number^_string^_binary^^4^_4^_4^^
As is evident, there is no way to infer the type of any of the '4' characters from the response alone as none of them have distinguishing delimiters.
Is there any way to coerce the inclusion of type metadata into the response?
I'd like to avoid:
Altering query statements to also include types as that would be obfuscatory and would be superfluous in the event I did switch interfaces;
Prefixing TEXT and BLOB values prior to insert as this would have to be uniform for all TEXT and BLOB interaction (in saying that, this is still my preferred choice should it come to that).
What I'm looking for is a switch of some kind that indicates type as part of SQLite's response, e.g.:
number^_string^_binary^^4^_'4'^_X'4'^^
number^_string^_binary^^4^_text:4'^blob:4^^
Or some variation thereof. Fundamental to this is the response alone contains enough information to discern the type and value of each element of that response (much in the same way sqlite3_column_type() allows in the SQLite Library API).
Update: I've refined this question since the first answer by #mike-sherrill-cat-recall to clarify expectations.
In SQLite, it doesn't always make sense to echo the data type of a column. SQLite doesn't have column-wise data types in the traditional sense. You can use typeof(X) in SQL to show the "datatype of the expression X".
sqlite> create table test (n integer, d decimal(8, 2));
sqlite> insert into test (n, d) values (8, 3.14);
sqlite> insert into test (n, d) values ('wibble', 'wibble');
Inserting text into an integer column succeeds.
sqlite> select n, typeof(n), d, typeof(d) from test;
n typeof(n) d typeof(d)
---------- ---------- ---------- ----------
8 integer 3.14 real
wibble text wibble text
You can concatenate anything you like--even producing caret notation--but it's kind of clumsy.
sqlite> select '(' || typeof(n) || ')^_' || n as caret_n from test;
caret_n
-------------------------
(integer)^_8
(text)^_wibble
SQLite Core Functions
The shell always converts printed values to strings. (That's what "print" means.)
If you don't want to add separate output columns for the types, you could use the quote function to output all values according to SQL syntax rules:
sqlite> with v(x) as (values (null), (1), (2.3), ('hello'), (x'00')) select quote(x) from v;
NULL
1
2.3
'hello'
X'00'
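Applied to the things table from the question, quote() makes all three values distinguishable from the response alone (shown here in the shell's default list mode):
sqlite> select quote(number), quote(string), quote(binary) from things;
4|'4'|X'34'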

Mismatch error in FoxPro SQL insertion

I need someone to help me out on how to trace a "data type mismatch" error in Visual FoxPro 6.0 when I issue a command like "insert into tmpcur from memvar".
tmpcur is a cursor with a large number of columns, and it is really hard to trace which one has the mismatched data type causing the insertion problem.
It is pretty difficult to trace the insertion of each record into VFP tables one by one, unlike with the MSSQL profiler.
I'd appreciate it if someone could help. Thanks.
This should help you. I have a temp cursor created with some bogus field/column names, testing the character, integer, double, currency, date and datetime types. To follow your scenario, I take the memory variable "bbbb", which should be double (or at least numeric), and change it to a string.
I then HOLD whatever error-trapping routine MAY be in effect and set my own (I don't think try/catch existed in VFP6; it may have, but I just don't remember), so I use ON ERROR to set a flag to true. I default the flag to false, try the insert, then check the flag. If the flag IS set, I loop through each column in the given table/alias (in my example it is "C_Tmp", so replace it with your table/alias). For each column, if the memory variable's data type differs from the table structure, the loop dumps the column name and the table / memory values for you to review.
You could put this to a log file or something.
Now, another consideration. Some types are completely valid and common for implied conversion, such as character and memo fields can both get strings. Integer, double, float, currency can all work with generic "numeric" values.
So, if you encounter these differences, then we can go one level further and look for comparable types, but let me know and we can adjust as needed.
At least this should give you a huge jump to your insert issue.
CREATE CURSOR C_tmp ( cccc c(10), iiii i, bbbb b(2), ccyyyy y, ddd d, tttt t )
SCATTER MEMVAR memo
m.bbbb = "wrong data type, was double with 2 decimal"
lcHoldError = ON("ERROR")
ON ERROR lFailInsert = .t.
lFailInsert = .f.
INSERT INTO C_Tmp FROM memvar
IF lFailInsert
   FOR lnI = 1 TO FCOUNT( "C_Tmp" )
      lcTmp = FIELD( lnI, "C_Tmp" )
      IF NOT TYPE( "C_Tmp." + lcTmp ) == TYPE( "m.&lcTmp" )
         ? "Invalid " + lcTmp, C_Tmp.&lcTmp, m.&lcTmp
      ENDIF
   ENDFOR
ENDIF
ON ERROR &lcHoldError
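If you would rather send that report to a log file than to the screen, one option is SET ALTERNATE, which routes ? output to a text file (the file name below is just an assumption):
SET ALTERNATE TO mismatch_log.txt ADDITIVE
SET ALTERNATE ON
* ... run the IF lFailInsert / FOR loop above here ...
SET ALTERNATE OFF
SET ALTERNATE TO    && close the alternate file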

Replace illegal XML characters with values from a table in Oracle PL/SQL

I have a requirement to check all text fields in a database schema for any illegal XML characters and replace them with a predefined set of acceptable values. This is to form part of a data transformation rule that can be called from other functions, so this function could be called over a billion times on our dataset and needs to operate really efficiently.
e.g. & = AND,
' = APOS
An example of what the function needs to achieve:
Update sometable set somefield = functioncall('f&re''d');
should result in
somefield having the value of 'fANDreAPOSd'
This is to be carried out by a generic PL/SQL function that takes a text field as input, iterates through that field, and replaces all illegal values.
I have had a look at http://asktom.oracle.com/pls/asktom/f?p=100:11:0::NO::P11_QUESTION_ID:2612348048
http://decipherinfosys.wordpress.com/2007/11/27/removing-un-wanted-text-from-strings-in-oracle/
for some ideas, but I have concerns over the efficiency and flexibility of these solutions.
The way the client wants to handle this is to have a table configured with each illegal character and its preferred replacement; the function then uses the values selected from this table to perform the replacements.
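For illustration only, a table-driven function along those lines might look like the sketch below; the table name xml_char_map, its columns and the function name are assumptions, and the per-call lookup would likely need caching to meet the performance requirement:
-- mapping table: one row per illegal character and its preferred replacement
create table xml_char_map (bad_char varchar2(1), replacement varchar2(10));
insert into xml_char_map values ('&', 'AND');
insert into xml_char_map values ('''', 'APOS');

create or replace function clean_xml_text(p_in in varchar2) return varchar2 is
  l_out varchar2(32767) := p_in;
begin
  -- apply each configured replacement in turn
  for r in (select bad_char, replacement from xml_char_map) loop
    l_out := replace(l_out, r.bad_char, r.replacement);
  end loop;
  return l_out;
end;
/

-- usage, as in the question:
-- update sometable set somefield = clean_xml_text(somefield);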
well, not exactly what you want, but consider this:
create type xmltest is object (s clob);
select XMLTYPE.createXml(xmltest('a& and ''')) from dual;
XMLTYPE.CREATEXML(XMLTEST('A&'''))
-------------------------------------------------------------------------------------
<XMLTEST>
<S>a&amp; and &apos;</S>
</XMLTEST>
However, the list of predefined XML entities is quite small, so there wouldn't be an issue replacing them with replace.
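For instance, escaping the five predefined entities with nested replace calls (handling & first so later replacements are not re-escaped; sometable/somefield are placeholders):
select replace(replace(replace(replace(replace(somefield,
       '&', '&amp;'),
       '<', '&lt;'),
       '>', '&gt;'),
       '"', '&quot;'),
       '''', '&apos;')
from sometable;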
If this is because of XML I'll support hal9000 - you should let Oracle do that for you. E.g. XML functions do it automatically:
SQLPLUS> set define off
SQLPLUS> select xmlelement("e", 'foo & bar <> bar "hemingway''s"') from dual;
XMLELEMENT("E",'FOO&BAR<>BAR"HEMINGWAY''S"')
--------------------------------------------------------------------------------
<e>foo &amp; bar &lt;&gt; bar "hemingway&apos;s"</e>

How to remove duplicated records/observations WITHOUT sorting in SAS?

I wonder if there is a way to deduplicate records WITHOUT sorting? Sometimes I want to keep the original order and just remove duplicated records.
Is it possible?
BTW, below are the approaches I know for removing duplicate records, both of which end up sorting the data.
1.
proc sql;
create table yourdata_nodupe as
select distinct *
From abc;
quit;
2.
proc sort data=YOURDATA nodupkey;
by var1 var2 var3 var4 var5;
run;
You could use a hash object to keep track of which values have been seen as you pass through the data set. Only output when you encounter a key that hasn't been observed yet. This outputs in the order the data was observed in the input data set.
Here is an example using the input data set "sashelp.cars". The original data was in alphabetical order by Make so you can see that the output data set "nodupes" maintains that same order.
data nodupes (drop=rc);
  length Make $13.;
  declare hash found_keys();
  found_keys.definekey('Make');
  found_keys.definedone();
  do while (not done);
    set sashelp.cars end=done;
    rc=found_keys.check();
    if rc^=0 then do;
      rc=found_keys.add();
      output;
    end;
  end;
  stop;
run;
proc print data=nodupes;run;
/* Give each record in the original dataset a row number */
data with_id ;
set mydata ;
_id = _n_ ;
run ;
/* Remove dupes */
proc sort data=with_id nodupkey ;
by var1 var2 var3 ;
run ;
/* Sort back into original order */
proc sort data=with_id ;
by _id ;
run ;
I think the short answer is no, there isn't, at least not a way that wouldn't have a much bigger performance hit than a method based on sorting.
There may be specific cases where this is possible (a dataset where all variables are indexed? A relatively small dataset that you could reasonably load into memory and work with there?) but this wouldn't help you with a general method.
Something along the lines of Chris J's solution is probably the best way to get the outcome you're after, but that's not an answer to your actual question.
Depending on the number of variables in your data set, the following might be practical:
data abc_nodup;
set abc;
retain _var1 _var2 _var3 _var4;
if _n_ eq 1 then output;
else do;
if (var1 eq _var1) and (var2 eq _var2) and
(var3 eq _var3) and (var4 eq _var4)
then delete;
else output;
end;
_var1 = var1;
_var2 = var2;
_var3 = var3;
_var4 = var4;
drop _var:;
run;
Please refer to Usage Note 37581: How can I eliminate duplicate observations from a large data set without sorting, http://support.sas.com/kb/37/581.html . Usage Note 37581 shows how PROC SUMMARY can be used to more efficiently remove duplicates without the use of sorting.
The two examples given in the original post are not identical:
distinct in proc sql only removes rows that are fully identical;
nodupkey in proc sort removes any row whose key variables are identical (even if the other variables differ); you need the noduprecs option to remove only fully identical rows (see the sketch below).
If you are only looking for records with common key variables, another solution would be to create a dataset with only the key variable(s), find out which ones are duplicates, and then apply a format to the original data to flag the duplicate records. If more than one key variable is present, you would need to create a new variable containing the concatenation of all the key variable values, converted to character if needed.
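To make that distinction concrete with the question's dataset abc (var1 and var2 standing in for the key variables):
proc sort data=abc out=key_dedup nodupkey;   /* keeps the first observation per key combination */
  by var1 var2;
run;
proc sort data=abc out=full_dedup noduprecs; /* drops an observation only when it exactly repeats the previous one */
  by var1 var2;
run;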
This is the fastest way I can think of. It requires no sorting.
data output_data_name;
set input_data_name (
sortedby = person_id stay
keep =
person_id
stay
... more variables ...);
by person_id stay;
if first.stay > 0 then output;
run;
data output;
set yourdata;
by var notsorted;
if first.var then output;
run;
This will not sort the data but will remove duplicates within each group.
