It's my first question in this community. I am running the following short script in gnuplot (5.2):
set table "testable.txt"
plot '+' using 1:($1**2):($1**3)
unset table
The resulting data file contains only two columns: first a series of numbers from -10 to 10, and second their squares (as expected). The third column (which should contain the cubes of the entries in the first column) is missing. How can I get that third column into my data file?
Use with table on the plot command. See help plot with table in gnuplot for details.
set table "testable.txt"
plot '+' using 1:($1**2):($1**3) with table
unset table
According to the H2 documentation for CSVREAD
If the column names are specified (a list of column names separated with the fieldSeparator), those are used, otherwise (or if they are set to NULL) the first line of the file is interpreted as the column names.
I'd expect that reading the CSV file
id,name,label,origin,destination,length
81,foobar,,19,11,27.4
like this
insert into route select * from csvread ('routes.csv',null,'charset=UTF-8')
would work. However, a JdbcSQLIntegrityConstraintViolationException is actually thrown, saying NULL not allowed for column "ORIGIN", with error code 23502.
If I explicitly add the column names to the insert statement like so,
insert into route (id,name,label,origin,destination,length) select * from csvread ('routes.csv',null,'charset=UTF-8')
it works fine. However, I'd prefer not to repeat myself - following the DRY principle :)
Using version 2.1.212.
The CSVREAD function produces a virtual table. Its column names can be specified in parameters or in the CSV file.
The INSERT command with a query doesn't map column names from the query to column names of the target table; it uses their ordinal positions instead. The value from the first column of the query is inserted into the first column specified in the insert column list (or into the first column of the target table, if no insert column list is specified), the second into the second column, and so on.
You can omit the insert column list only if your table was defined with the same columns, in the same order, as the source query (in your case, as the CSV file). If your table declares its columns in a different order, or has additional columns, you need to specify the list.
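The positional mapping can be demonstrated with any SQL engine. Here is a minimal sketch using Python's sqlite3 module (not H2, but the INSERT ... SELECT behavior is the same); the table and column names are made up for illustration:

```python
# Demonstration that INSERT ... SELECT maps columns by position, not by name.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE src (b INTEGER, a INTEGER)")
con.execute("CREATE TABLE dst (a INTEGER, b INTEGER)")
con.execute("INSERT INTO src (b, a) VALUES (2, 1)")

# No insert column list: src's first column (b = 2) lands in dst.a,
# even though the columns have different names.
con.execute("INSERT INTO dst SELECT * FROM src")
print(con.execute("SELECT a, b FROM dst").fetchone())  # (2, 1)

# An explicit column list restores the intended mapping.
con.execute("DELETE FROM dst")
con.execute("INSERT INTO dst (b, a) SELECT b, a FROM src")
print(con.execute("SELECT a, b FROM dst").fetchone())  # (1, 2)
```

This is why the explicit column list in the INSERT works while the bare SELECT * version fails: the mapping is positional, so the list is only redundant when both sides declare the same columns in the same order.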
My file.sql contains 50,000 INSERT statements. Execution of some of them failed because a value was too large for its column. How can I find out which INSERT statements failed (at which line numbers in the file)?
I take it you want the missing data to be inserted after all?
1. Can you delete all data, alter the table to hold larger values, and run the script again?
2. Is there a unique key on the table? Then alter the table so it can hold larger values and run the script again. Only the data you do not already have will be inserted.
3. Create the same table in another schema or database with the modified definition. Insert the data there. Query the records where the length of the column value exceeds the previous maximum. Generate INSERT statements only for those records and run them against the original (but now modified to hold larger values) table.
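Alternatively, the oversized values can be located by scanning the file itself. A minimal Python sketch, assuming one INSERT statement per line, single-quoted string literals without embedded quotes, and a known column limit (here 100 characters; adjust both to your schema):

```python
# Sketch: report line numbers of INSERT statements whose longest
# string literal exceeds an assumed column limit.
import re

MAX_LEN = 100  # assumed column limit; adjust to your schema


def oversized_lines(path, max_len=MAX_LEN):
    """Return (line_number, longest_literal_length) for suspect INSERTs."""
    bad = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if not line.lstrip().upper().startswith("INSERT"):
                continue
            # Naive literal extraction; does not handle escaped quotes.
            literals = re.findall(r"'([^']*)'", line)
            longest = max((len(s) for s in literals), default=0)
            if longest > max_len:
                bad.append((lineno, longest))
    return bad
```

Running oversized_lines("file.sql") then lists exactly the line numbers to inspect, without re-executing all 50,000 statements.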
I have text file which looks like as below,
ID1~name1~city1~zipcode1~position1
ID2~name2~city2~zipcode2~position2
ID3~name3~city3~zipcode3~position3
ID4~name4~city4~zipcode4~position4
.
.
etc goes on...
This text file is the source; I want to split each line on the delimiter (~) and compare the records against the table by ID.
If the ID is not in the table, an insert should be performed.
If the ID is in the table but the other column values differ, the table needs to be updated.
If the ID is not in the text file but is in the table, the record should be deleted.
I googled it but could only find the page below:
https://www.experts-exchange.com/questions/27419804/VBScript-compare-differences-in-two-record-sets.html
Please help me with how I can proceed in VBScript.
Whose leg are you trying to pull? Obviously the desired/resulting table is the input data, so use LOAD DATA INFILE to import the file.
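If you do want the row-by-row comparison rather than a bulk reload, the insert/update/delete split can be sketched like this (shown in Python for brevity; the same three-way logic applies in VBScript with ADO recordsets):

```python
# Sketch of the compare-and-sync algorithm between a '~'-delimited
# file and a table, keyed by ID.

def parse_line(line):
    """Split one file line into (ID, tuple of remaining columns)."""
    parts = line.rstrip("\n").split("~")
    return parts[0], tuple(parts[1:])


def plan_sync(file_rows, table_rows):
    """file_rows/table_rows: dicts mapping ID -> tuple of column values.
    Returns (to_insert, to_update, to_delete) as lists of IDs."""
    to_insert = [k for k in file_rows if k not in table_rows]
    to_update = [k for k in file_rows
                 if k in table_rows and file_rows[k] != table_rows[k]]
    to_delete = [k for k in table_rows if k not in file_rows]
    return to_insert, to_update, to_delete
```

Load both sides into ID-keyed dictionaries (a Dictionary object in VBScript), then execute the three resulting lists as INSERT, UPDATE, and DELETE statements.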
I have a BIRT Excel Report with 10 columns. I have a query which executes and brings the data for all the 10 columns.
However, based on one of the input parameters, I need to display just 8 columns. I am able to hide the remaining 2 columns, but I would like to delete those 2 columns from the report so that the user does not see the hidden columns.
I tried to change the query, but I am unable to set the select parameters dynamically.
Is there a way, either in the query or in BIRT, to remove a few columns based on an input condition?
You cannot delete the columns, but it's sufficient to hide them dynamically using the column's visibility expression.
E.g. if your table column shows the DS column NAME and you want to hide the column when NAME is empty for all rows:
Add an aggregation (let's call it MAX_NAME) to the table, with the aggregation function MAX and the expression NAME. Then, in the visibility expression of the table column, use !row["MAX_NAME"] as the expression.
After dragging and dropping the dataset, right-click on the column header and select the Delete Column option.
I have a table in Mac Numbers which has a checkbox column. I am trying to copy only those rows that are checked into a second table.
I also want to extend this solution to multiple tables: I will have multiple tables, each with a checkbox column, and I want to copy all of the checked rows into a single table.
I tried the LOOKUP function, but it didn't help.
How can we do this?
I worked it out in 2 steps:
Used an IF condition to emit the column data if the checkbox is checked, else "NA".
Then put a filter on the new table to filter out all rows whose value in that column is "NA".
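The two steps can be modeled like this (a Python sketch of the logic with made-up sample rows; the actual Numbers IF formula and filter settings will differ):

```python
# Model of the two-step approach: flag unchecked rows as "NA",
# then filter them out (the role the Numbers table filter plays).
rows = [
    ("ID1", "name1", True),   # checkbox checked
    ("ID2", "name2", False),  # unchecked
    ("ID3", "name3", True),
]

# Step 1: keep the row data if the checkbox is checked, else "NA".
flagged = [(rid, name) if checked else "NA"
           for rid, name, checked in rows]

# Step 2: filter out the "NA" rows.
result = [r for r in flagged if r != "NA"]
print(result)  # [('ID1', 'name1'), ('ID3', 'name3')]
```

For multiple source tables, apply step 1 in each table and point the collecting table's rows at all of them before filtering.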