Finding a conditional min date in Tableau - filter

I have a database with lots of records that have a date.
I want to show the minimum date after which no different booking type occurs.
It has these fields:
Date, Booking type, ID
ID  | Date  | Booking type
123 | 01/04 | A
123 | 01/05 | B
123 | 01/06 | A
123 | 01/07 | A
So for ID 123 I would only want to show the record on date 01/06 with booking type 'A', since no different booking type occurs after that date.
At the moment I can only get date 01/04 for booking type A, ID 123.

First you calculate how many distinct Booking Types there are for each ID and Date:
{FIXED [ID], [Date]: COUNTD([Booking Type])}
Once done, you can add a filter where SUM(....) > 1
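Outside Tableau, the intended logic can be sketched in plain Python on the sample data from the question: walk backwards from the last row, keep going while the booking type matches the final one, and take the earliest such date. (The function name and data layout are illustrative, not part of the original post.)

```python
# Sample data from the question: (id, date, booking_type)
rows = [
    (123, "01/04", "A"),
    (123, "01/05", "B"),
    (123, "01/06", "A"),
    (123, "01/07", "A"),
]

def min_stable_date(rows, rec_id):
    """Earliest date after which no different booking type appears."""
    mine = sorted(r for r in rows if r[0] == rec_id)
    last_type = mine[-1][2]  # booking type of the latest record
    start = None
    # walk backwards while the type still matches the latest one
    for _, d, t in reversed(mine):
        if t != last_type:
            break
        start = d
    return start

print(min_stable_date(rows, 123))  # "01/06"
```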

The ambiguity w.r.t. the date field in dim_time

I have come across a fact table, fact_trips, composed of columns like:
driver_id,
vehicle_id,
date (in int form, 'YYYYMMDD'),
timestamp (in milliseconds, bigint),
miles,
time_of_trip
I have another table, dim_time, composed of columns like:
date (in int form, 'YYYYMMDD'),
timestamp (in milliseconds, bigint),
month,
year,
day_of_week,
day
Now, when I want to see trips grouped by year, I have to join the two tables on timestamp (the bigint) and then group by the year from dim_time.
Why do we even keep the date in int form, then, if ultimately I have to join on timestamp? What needs to be changed?
Also, dim_time does not have a primary key, so there are multiple entries for the same date. When I join the tables, I therefore get more rows back than expected.
You should have 2 dim tables:
DIM_DATE: PK = YYYYMMDD
DIM_TIME: PK = a number; it will hold as many records as there are milliseconds in a day (assuming you are holding time at the millisecond grain rather than second, minute, etc.)
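The split can be sketched with a toy star schema on SQLite (table and column names here are illustrative, not a fixed convention): with the integer YYYYMMDD value as the date dimension's primary key, grouping trips by year needs only a join on that key, no timestamp involved.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- date dimension keyed by the YYYYMMDD int itself
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY,
                       year INTEGER, month INTEGER, day INTEGER);
CREATE TABLE fact_trips (driver_id INTEGER, date_key INTEGER, miles REAL);
INSERT INTO dim_date VALUES (20240115, 2024, 1, 15), (20250301, 2025, 3, 1);
INSERT INTO fact_trips VALUES (1, 20240115, 10.0), (1, 20250301, 5.0),
                              (2, 20250301, 7.5);
""")
# One row per date in dim_date, so the join cannot fan out
rows = con.execute("""
    SELECT d.year, SUM(f.miles)
    FROM fact_trips f JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.year ORDER BY d.year
""").fetchall()
print(rows)  # [(2024, 10.0), (2025, 12.5)]
```

Because date_key is a primary key, the duplicate-rows problem from the question disappears by construction.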

How to retrieve workflow attribute values from workflow table?

I have a situation where I need to take values from a table column whose data depends on another column in the same table.
There are two such column values, and they need to be compared against another table.
Scenario:
Column 1 query:
SELECT text_value
FROM WF_ITEM_ATTRIBUTE_VALUES
WHERE name LIKE 'ORDER_ID' --AND number_value IS NOT NULL
AND Item_type LIKE 'ABC'
this query returns 14 unique records
Column 2 query:
SELECT number_value
FROM WF_ITEM_ATTRIBUTE_VALUES
WHERE name LIKE 'Source_ID' --AND number_value IS NOT NULL
AND Item_type LIKE 'ABC'
this also returns 14 records
The order_id of the first query is associated with the source_id of the second query. Using these two columns, I want to compare the 14 combined (order_id, source_id) records with another table, Sales_tbl, which has the
columns sal_order_id, sal_source_id
Sample Data from WF_ITEM_ATTRIBUTE_VALUES:
Note: the sales_tbl table holds the same data, but the columns are named sal_order_id and sal_source_id.
Order_id
204994 205000 205348 198517 198176 196856 204225 205348 203510 206528 196886 198971 194076 197940
Source_id
92262138 92261783 92262005 92262615 92374992 92375051 92374948 92375000 92375011 92336793 92374960 92691360 92695445 92695880
Desired O/p based on comparison:
Please help me in writing the query
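No answer was posted, but one common approach can be sketched: assuming each workflow item stores both attributes under a shared ITEM_KEY (which WF_ITEM_ATTRIBUTE_VALUES normally has), self-join the attribute table to pivot the two attribute rows into one (order_id, source_id) pair, then anti-join against sales_tbl. The sketch below runs on SQLite with made-up item keys and uses `=` where the original used LIKE; all sample values beyond the first two pairs are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE wf_item_attribute_values
    (item_type TEXT, item_key TEXT, name TEXT,
     text_value TEXT, number_value INTEGER);
CREATE TABLE sales_tbl (sal_order_id TEXT, sal_source_id INTEGER);
INSERT INTO wf_item_attribute_values VALUES
    ('ABC', 'K1', 'ORDER_ID',  '204994', NULL),
    ('ABC', 'K1', 'Source_ID', NULL,     92262138),
    ('ABC', 'K2', 'ORDER_ID',  '205000', NULL),
    ('ABC', 'K2', 'Source_ID', NULL,     92261783);
-- only the K1 pair exists in sales; K2 should surface in the comparison
INSERT INTO sales_tbl VALUES ('204994', 92262138);
""")
pairs_not_in_sales = con.execute("""
    SELECT o.text_value AS order_id, s.number_value AS source_id
    FROM wf_item_attribute_values o
    JOIN wf_item_attribute_values s
      ON s.item_key = o.item_key AND s.item_type = o.item_type
    WHERE o.item_type = 'ABC' AND o.name = 'ORDER_ID' AND s.name = 'Source_ID'
      AND NOT EXISTS (SELECT 1 FROM sales_tbl t
                      WHERE t.sal_order_id = o.text_value
                        AND t.sal_source_id = s.number_value)
""").fetchall()
print(pairs_not_in_sales)  # [('205000', 92261783)]
```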

Convert value while inserting into HIVE table

I have created a bucketed table called emp_bucket with 4 buckets, clustered on the salary column. The structure of the table is as below:
hive> describe Consultant_Table_Bucket;
OK
id int
age int
gender string
role string
salary double
Time taken: 0.069 seconds, Fetched: 5 row(s)
I also have a staging table from which I can insert data into the bucketed table above. Below is sample data from the staging table:
id age Gender role salary
-----------------------------------------------------
938 38 F consultant 55038.0
939 26 F student 33319.0
941 20 M student 97229.0
942 48 F consultant 78209.0
943 22 M consultant 77841.0
My requirement is to load into the bucketed table only those employees whose salary is greater than 10,000, and, while loading, to convert the "consultant" role to "BigData consultant".
I know how to insert data into my bucketed table using a SELECT, but I need some guidance on how the consultant value in the role column can be changed to BigData consultant while inserting.
Any help appreciated.
Based on your insert, you just need to work on the role part of your select:
set hive.enforce.bucketing = true;  -- needed on Hive versions before 2.0
INSERT INTO TABLE emp_bucket
select
    id
    , age
    , gender
    , if(role = 'consultant', 'BigData consultant', role) as role
    , salary
FROM
    stage_table
where
    salary > 10000
;
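Hive's if() passes non-matching roles through unchanged. The same conditional transform plus the salary filter, sketched in plain Python on two of the sample staging rows:

```python
# (id, age, gender, role, salary) rows, as in the staging table sample
staging = [
    (938, 38, "F", "consultant", 55038.0),
    (939, 26, "F", "student",    33319.0),
]

def convert_role(role):
    """Mirror of if(role='consultant', 'BigData consultant', role)."""
    return "BigData consultant" if role == "consultant" else role

# apply the role conversion and the salary > 10000 filter
loaded = [(i, a, g, convert_role(r), s)
          for i, a, g, r, s in staging if s > 10000]
print(loaded)
```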

Select a value from one table and update a value in another, in a single query

I have two tables, request and details. The request table's id is a foreign key in the details table. I have a price (321) and a request_id (1819AM002). I want to fetch the id from the request table using request_id, and then update the price field in the details table, all in a single query. Is it possible to achieve this in a single query?
request table
id request_id name type
1 1819AM001 XXX A
2 1819AM002 YYY A
Details table
id request_id price
1 2 133
Try this:
DB::table('details')
->join('request', 'details.request_id', 'request.id')
->where('request.request_id', '1819AM002')
->update(['price' => 321]);
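The query builder call above compiles to a joined UPDATE (supported on MySQL). A portable single-statement equivalent uses a subquery instead of a join, sketched here on SQLite with the sample data from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE request (id INTEGER, request_id TEXT, name TEXT, type TEXT);
CREATE TABLE details (id INTEGER, request_id INTEGER, price INTEGER);
INSERT INTO request VALUES (1, '1819AM001', 'XXX', 'A'),
                           (2, '1819AM002', 'YYY', 'A');
INSERT INTO details VALUES (1, 2, 133);
""")
# resolve request.id from the business key, then update details, in one statement
con.execute("""
    UPDATE details SET price = 321
    WHERE request_id = (SELECT id FROM request WHERE request_id = '1819AM002')
""")
price = con.execute(
    "SELECT price FROM details WHERE request_id = 2").fetchone()[0]
print(price)  # 321
```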

ORACLE SQL one to many relationship?

Let's see if I can explain what I'm trying to do.
I have a table where each row has a begin date (begin_date) and an end date (end_date). This table has a key (person_key).
I have another table where that person_key can have multiple entries, each with a begin_date, an end_date, and an associated value.
example:
Table 1
key begin_date end_date
123 1/1/2016 1/31/2016
123 2/1/2016 2/29/2016
123 3/1/2016 3/31/2016
Table 2
key begin_date end_date value
123 1/15/2016 2/16/2016 X
123 2/17/2016 12/31/2099 Y
What I want to be able to do in SQL is write a query that will produce the following results:
Table 3
key begin_date end_date value
123 1/1/2016 1/31/2016 X
123 2/1/2016 2/16/2016 X
123 2/17/2016 2/29/2016 Y
123 3/1/2016 3/31/2016 Y
This may be way too involved for a simple solution, but I'm just looking for some guidance!
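No answer was posted, but the desired output follows a recognizable rule: split each Table 1 period wherever the Table 2 value changes inside it, and let days before the first Table 2 interval inherit the earliest value (which is why 1/1-1/31 shows X). A minimal Python sketch of that rule on the sample data; the day-by-day walk is for illustration only, and a real Oracle solution would do this set-wise with a range join or analytic functions.

```python
from datetime import date, timedelta

# Table 1: (key, begin_date, end_date)
table1 = [
    (123, date(2016, 1, 1), date(2016, 1, 31)),
    (123, date(2016, 2, 1), date(2016, 2, 29)),
    (123, date(2016, 3, 1), date(2016, 3, 31)),
]
# Table 2: (key, begin_date, end_date, value)
table2 = [
    (123, date(2016, 1, 15), date(2016, 2, 16), "X"),
    (123, date(2016, 2, 17), date(2099, 12, 31), "Y"),
]

def value_on(key, day):
    """Value effective on a given day; days before the first interval
    inherit the earliest value (backfill)."""
    rows = sorted(r for r in table2 if r[0] == key)
    for _, b, e, v in rows:
        if b <= day <= e:
            return v
    return rows[0][3] if rows and day < rows[0][1] else None

result = []
for key, begin, end in table1:
    # compress consecutive days with the same value into one segment
    seg_start, seg_val = begin, value_on(key, begin)
    day = begin
    while day < end:
        nxt = day + timedelta(days=1)
        v = value_on(key, nxt)
        if v != seg_val:
            result.append((key, seg_start, day, seg_val))
            seg_start, seg_val = nxt, v
        day = nxt
    result.append((key, seg_start, end, seg_val))

for row in result:
    print(row)
```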
