I have a column defined as $table->date('start_date');, and I want to store both date and time, so I will need a timestamp.
I already have some data in my current table, so I am not sure what to do without deleting the existing data.
I found some solutions (for changing the column type) that involve Doctrine, but from what I read, the supported Laravel versions only go up to 6.x, and I am using 7.
Any solutions?
According to the PostgreSQL documentation, you can convert a date to a timestamp by using the to_timestamp() function:
https://www.postgresql.org/docs/8.2/functions-formatting.html
You have two choices:
Create a new column and write a script that converts all existing entries into the new format
Write a script that updates the data in place in the existing table (see the sketch below)
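A minimal sketch of the in-place option, assuming a hypothetical table named sales: PostgreSQL casts date to timestamp implicitly, so a single ALTER TABLE can convert the column without losing rows (existing dates get a time of 00:00:00).
-- sales is a hypothetical table name; existing rows are preserved
ALTER TABLE sales
    ALTER COLUMN start_date TYPE timestamp
    USING start_date::timestamp;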
I am trying to delete a set of data in the target table based on a column (year) from the lookup in IICS (Informatica Cloud).
I want to solve this problem using pre/post SQL commands, but the constraint is that I can't pass the year column to my query.
I tried this:
delete from sample_db.tbl_emp where emp_year = {year}
I want to delete all the employees for a specific year that I get from the lookup return.
For example:
If I get the year '2019', all the records in table sample_db.tbl_emp containing emp_year=2019 must be deleted.
I am not sure how this works in informatica cloud.
Any leads would be helpful.
How are you getting the year value? A pre/post SQL command may not be the way to go unless you need to do this as part of another transformation, i.e., before or after the transformation runs. Also, does your org only have ICDI, or also ICAI? ICAI may be a better solution depending on how the value is being provided.
The following steps would help you achieve this.
Create an input-output parameter in your mapping.
Assign the result of your lookup to the parameter in an expression transformation using SetMaxVariable (see the sketch after these steps).
Use the parameter in your target pre-SQL as:
delete from sample_db.tbl_emp where emp_year = $$parameter
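In the expression transformation, the assignment might look like this; the port name is hypothetical:
-- lkp_emp_year is a hypothetical port carrying the year returned by the lookup
SETMAXVARIABLE($$parameter, lkp_emp_year)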
Let me know if you have any further questions.
I am trying to automate an UPDATE query based off one columns data and apply that same field to another.
See screenshot:
In this, what I am trying to do is update PCODE from a value of 0 to the value shown in SI_PCODE. The update needs to be keyed by the unique SALES_QUOTE_ID. I currently do this manually with Excel, using
update sales_quote_contact set pcode= where sales_quote_id=;
but want to run this automatically through a batch job I have.
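Assuming PCODE and SI_PCODE are columns of the same sales_quote_contact table, as the screenshot suggests, a single set-based UPDATE could replace the per-row Excel-generated statements and run from the batch job:
-- sketch: fills every zero PCODE from its own row's SI_PCODE in one statement
update sales_quote_contact
set pcode = si_pcode
where pcode = 0;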
I am using Laravel 5.1. I want to get milliseconds precision for created_at that I get from the database.
I have tried the solution given for this question, but there isn't any change. I also tried a raw query and still get only up to seconds. Is there any way to get millisecond precision?
It may be a problem with your column type. Assuming you are using MySQL, you should manually add (or modify) a column with the type TIMESTAMP(3) on your table:
\DB::statement("ALTER TABLE `my_table` ADD COLUMN my_col TIMESTAMP(3) NULL");
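If the goal is millisecond precision on the existing created_at column rather than on a new one, modifying it in place should also work; a sketch, assuming a hypothetical table name:
-- requires MySQL 5.6.4+ for fractional-second support
ALTER TABLE `my_table` MODIFY `created_at` TIMESTAMP(3) NULL;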
More information here: https://dev.mysql.com/doc/refman/5.7/en/fractional-seconds.html
I'm trying to build a star schema in Oracle 12c. In my case the data source is not a relational database but a single Excel/CSV file populated via a Google Form, which means I don't have any sort of reference from a source system, such as auto-incremented keys/IDs. Now, what would be the best approach to build a star schema given this constraint?
File row sample:
<submitted timestamp>,<submitted by user>,<region>,<country>,<branch>,<branch location>,<branch area>,<branch type>,<branch name>,<branch private? yes/no value>,<the following would be all "fact" values (measurements),...,...,...
In case I wanted to build a "branch" dimension, how would I handle updates/inserts after the first load into the dimension table?
My thoughts on a solution so far:
I had thought of making a concatenated string "key" from the branch values, which would make each branch unique (an underscore would be the "glue" used to concatenate the values), e.g.:
<region>_<country>_<branch>_<branch location> as branch_key
I would insert all the distinct branches into a staging table, including the branch_key column for each one of them. Then, when loading into the dimension, I could compare which keys do not exist yet in my dimension table and insert those. As for updates, I'm a bit stuck on how to handle them; I had thought of having another file mapping which branches are active, with an expiration-date column. Basically, I am trying to simulate what I could do if the data were in a database instead of CSV files.
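A minimal Oracle sketch of that staging-to-dimension load, keyed on the concatenated value; the table and column names (stg_branch, dim_branch) are hypothetical:
-- insert only the branches whose key is not in the dimension yet
MERGE INTO dim_branch d
USING (
    SELECT DISTINCT
           region || '_' || country || '_' || branch || '_' || branch_location AS branch_key,
           region, country, branch, branch_location
    FROM stg_branch
) s
ON (d.branch_key = s.branch_key)
WHEN NOT MATCHED THEN
    INSERT (branch_key, region, country, branch, branch_location)
    VALUES (s.branch_key, s.region, s.country, s.branch, s.branch_location);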
This is all I can think of so far; do you have any other recommendations/ideas on how to implement this? Take into consideration that the data source cannot change, as in I have to read these CSV files, since the data is not stored anywhere else.
Thank you.
Here's the scenario:
Say you have a Hive Table that stores twitter data.
Say it has 5 columns. One column being the Text Data.
Now, how do you add a 6th column that stores the sentiment value from the sentiment analysis of the Twitter text data? I plan to use a sentiment analysis API like Sentiment140 or Viralheat.
I would appreciate any tips on how to implement the "derived" column in Hive.
Thanks.
Unfortunately, while the Hive API lets you add a new column to your table (using ALTER TABLE foo ADD COLUMNS (bar binary)), those new columns will be NULL and cannot be populated. The only way to add data to these columns is to clear the table's rows and load the data from a new file that already contains the new column's data.
To answer your question: You can't, in Hive. To do what you propose, you would have to have a file with 6 columns, the 6th already containing the sentiment analysis data. This could then be loaded into your HDFS, and queried using Hive.
EDIT: Just tried an example where I exported the table as a .csv after adding the new column (see above), and popped that into M$ Excel where I was able to perform functions on the table values. After adding functions, I just saved and uploaded the .csv, and rebuilt the table from it. Not sure if this is helpful to you specifically (since it's not likely that sentiment analysis can be done in Excel), but may be of use to anyone else just wanting to have computed columns in Hive.
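For reference, rebuilding the table from the edited file might look like this; the path is hypothetical:
-- reload the exported/edited .csv over the table's existing contents
LOAD DATA LOCAL INPATH '/tmp/table_with_new_column.csv' OVERWRITE INTO TABLE foo;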
References:
https://cwiki.apache.org/Hive/gettingstarted.html#GettingStarted-DDLOperations
http://comments.gmane.org/gmane.comp.java.hadoop.hive.user/6665
You can do this in two steps without a separate table. Steps:
Alter the original table to add the required column
Do an "overwrite table select" of all columns plus your computed column from the original table back into the original table (a sketch follows below).
Caveat: This has not been tested on a clustered installation.
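In HiveQL those two steps might look like this; the table and column names are hypothetical, and compute_sentiment() stands in for whatever UDF wraps the sentiment analysis API:
-- step 1: add the new column (NULL for existing rows)
ALTER TABLE tweets ADD COLUMNS (sentiment STRING);
-- step 2: rewrite the table, filling the new column from the text column
INSERT OVERWRITE TABLE tweets
SELECT user_id, created_at, lang, retweet_count, text_data,
       compute_sentiment(text_data)
FROM tweets;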