How to configure a schema.graphqls input so one field can accept multiple types? (GraphQL, Spring Boot)

I have an input like this:
filter(name: String, value: "abc")
filter(name: String, value: 1)
filter(name: String, value: [1,2,3,4])
filter(name: String, value: ["a","b","c","d"])
How do I configure schema.graphqls so it can accept something like this:
input filter {
name: String
value: (can be multiple types depending on the input: Int, String, Float, Boolean)
}

Related

Set parent field value after resolving child type values in GraphQL

There are two sub-types, as follows:
type Test1 {
id: String
flag: Boolean
}
type Test2 {
id: String
flag: Boolean
}
The main type is defined as follows:
type Test {
id: String
flag: Boolean
test1: Test1
test2: Test2
}
I have written two separate resolvers for the two sub-types above and a resolver for the main type.
I want to set the flag field value inside the main type (Test) by combining the Test1 and Test2 flag fields with an OR condition.
I would appreciate it if someone could help me resolve this.

Spark repartitionAndSortWithinPartitions with tuples

I'm trying to follow this example to partition HBase rows: https://www.opencore.com/blog/2016/10/efficient-bulk-load-of-hbase-using-spark/
However, I already have data stored as (String, String, String), where the first element is the row key, the second is the column name, and the third is the column value.
I tried writing an implicit ordering to satisfy the OrderedRDD implicit conversion:
implicit val caseInsensitiveOrdering: Ordering[(String, String, String)] = new Ordering[(String, String, String)] {
override def compare(x: (String, String, String), y: (String, String, String)): Int = ???
}
but repartitionAndSortWithinPartitions is still not available. Is there a way I can use this method with this tuple?
The RDD must contain key-value pairs, not only values, for example:
val data = List((("5", "6", "1"), (1)))
val rdd : RDD[((String, String, String), Int)] = sparkContext.parallelize(data)
implicit val caseInsensitiveOrdering = new Ordering[(String, String, String)] {
override def compare(x: (String, String, String), y: (String, String, String)): Int = 1
}
rdd.repartitionAndSortWithinPartitions(..)
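To make that concrete, here is a minimal, self-contained sketch (not from the original answer) that runs on a local Spark master; the HashPartitioner and the partition count of 2 are arbitrary choices, and the ordering compares the key tuples lexicographically rather than always returning 1:
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object RepartitionSortSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("sketch").setMaster("local[*]"))

    // The (rowkey, column, value) triple becomes the key; the payload is the value.
    val data = List(
      (("5", "6", "1"), 1),
      (("2", "a", "x"), 2)
    )
    val rdd: RDD[((String, String, String), Int)] = sc.parallelize(data)

    // An Ordering for the key type; lexicographic comparison of the three strings.
    implicit val keyOrdering: Ordering[(String, String, String)] =
      Ordering.Tuple3(Ordering.String, Ordering.String, Ordering.String)

    // With a pair RDD and a key Ordering in scope, the method becomes available.
    val sorted = rdd.repartitionAndSortWithinPartitions(new HashPartitioner(2))
    sorted.collect().foreach(println)

    sc.stop()
  }
}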

Is there a way to list keys in context.Context?

So, I have a context.Context (https://golang.org/pkg/context/) variable with me. Is there a way I can list all the keys this variable holds?
It is possible to list the internals of a context.Context using unsafe reflection, and to use that information to figure out the keys and/or to see whether the information you want is in the context.
There are some pitfalls: for example, if the context implementation returns a hardcoded value for a key, it won't show up here, and it can be quite unclear how to actually access the values using the keys.
This is nothing I would run in production, but in my case I needed to inspect the context.Context to better understand what information it contains.
import (
    "fmt"
    "reflect"
    "unsafe"
)

func printContextInternals(ctx interface{}, inner bool) {
    contextValues := reflect.ValueOf(ctx).Elem()
    contextKeys := reflect.TypeOf(ctx).Elem()
    if !inner {
        fmt.Printf("\nFields for %s.%s\n", contextKeys.PkgPath(), contextKeys.Name())
    }
    if contextKeys.Kind() == reflect.Struct {
        for i := 0; i < contextValues.NumField(); i++ {
            reflectValue := contextValues.Field(i)
            // Re-create the field value at its own address so unexported fields can be read.
            reflectValue = reflect.NewAt(reflectValue.Type(), unsafe.Pointer(reflectValue.UnsafeAddr())).Elem()
            reflectField := contextKeys.Field(i)
            if reflectField.Name == "Context" {
                // Recurse into the wrapped parent context.
                printContextInternals(reflectValue.Interface(), true)
            } else {
                fmt.Printf("field name: %+v\n", reflectField.Name)
                fmt.Printf("value: %+v\n", reflectValue.Interface())
            }
        }
    } else {
        fmt.Printf("context is empty (int)\n")
    }
}
Examples:
func Ping(w http.ResponseWriter, r *http.Request) {
printContextInternals(r.Context(), false)
/* Prints
Fields for context.valueCtx
context is empty (int)
field name: key
value: net/http context value http-server
field name: val
value: &{Addr::20885 Handler:0xc00001c000 TLSConfig:0xc000001c80 ReadTimeout:0s ReadHeaderTimeout:0s WriteTimeout:0s IdleTimeout:0s MaxHeaderBytes:0 TLSNextProto:map[h2:0x12db010] ConnState:<nil> ErrorLog:<nil> BaseContext:<nil> ConnContext:<nil> disableKeepAlives:0 inShutdown:0 nextProtoOnce:{done:1 m:{state:0 sema:0}} nextProtoErr:<nil> mu:{state:0 sema:0} listeners:map[0xc00015a840:{}] activeConn:map[0xc000556fa0:{}] doneChan:<nil> onShutdown:[0x12e9670]}
field name: key
value: net/http context value local-addr
field name: val
value: [::1]:20885
field name: mu
value: {state:0 sema:0}
field name: done
value: 0xc00003c2a0
field name: children
value: map[context.Background.WithValue(type *http.contextKey, val <not Stringer>).WithValue(type *http.contextKey, val [::1]:20885).WithCancel.WithCancel:{}]
field name: err
value: <nil>
field name: mu
value: {state:0 sema:0}
field name: done
value: <nil>
field name: children
value: map[]
field name: err
value: <nil>
field name: key
value: 0
field name: val
value: map[]
field name: key
value: 1
field name: val
value: &{handler:0x151cf50 buildOnly:false name: err:<nil> namedRoutes:map[] routeConf:{useEncodedPath:false strictSlash:false skipClean:false regexp:{host:<nil> path:0xc0003d78f0 queries:[]} matchers:[0xc0003d78f0 [GET POST]] buildScheme: buildVarsFunc:<nil>}}
*/
printContextInternals(context.Background(), false)
/* Prints
Fields for context.emptyCtx
context is empty (int)
*/
}
No, there is no way to list all the keys of a context.Context, because that type is just an interface. So what does this mean?
In general, a variable can hold a concrete type or an interface. A variable with an interface type does not carry any concrete type information, so it makes no difference whether the interface is the empty interface (interface{}) or context.Context: many different types can implement that interface. The variable itself does not have a concrete type; it is just something abstract.
With reflection you can observe the fields and all the methods of the concrete type stored in that variable, but the logic of how the method Value(key interface{}) interface{} is implemented is not fixed. It does not have to be a map; you could also build an implementation backed by slices, a database, your own kind of hash table, and so on.
So there is no general way to list all the values.
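As a minimal illustration of that last point (this is a hypothetical type, not part of the answer above), here is a Context whose Value method returns a hardcoded value for one key; that key is not stored in any field, so no amount of reflection over the context's internals would reveal it:
package main

import (
    "context"
    "fmt"
)

// hardcodedCtx answers one key itself and delegates everything else
// to the wrapped parent context.
type hardcodedCtx struct {
    context.Context
}

func (c hardcodedCtx) Value(key interface{}) interface{} {
    if key == "request-id" { // nothing is stored anywhere for this key
        return "abc-123"
    }
    return c.Context.Value(key)
}

func main() {
    ctx := hardcodedCtx{context.Background()}
    fmt.Println(ctx.Value("request-id")) // abc-123
    fmt.Println(ctx.Value("other"))      // <nil>
}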

Only STRING-defined columns are loaded in Hive, i.e. columns with INT and DOUBLE are NULL

Only columns defined as STRING are loaded in Hive; the INT and DOUBLE columns all come out as NULL.
Create table command
create table A(
id STRING,
member_id STRING,
loan_amnt DOUBLE,
funded_amnt DOUBLE,
`funded_amnt_inv` DOUBLE,
`term` STRING,
`int_rate` STRING,
`installment` DOUBLE,
`grade` STRING,
`sub_grade` STRING,
`emp_title` STRING,
`emp_length` STRING,
`home_ownership` STRING,
`nnual_inc` INT,
`verification_status` STRING,
`issue_d` STRING,
`loan_status` STRING,
`pymnt_plan` STRING,
`url` STRING,
`desc` STRING,
`purpose` STRING,
`title` STRING,
`zip_code` STRING,
`addr_state` STRING,
`dti` DOUBLE,
`delinq_2yrs` INT,
`earliest_cr_line` STRING,
`inq_last_6mths` STRING,
`mths_since_last_delinq` STRING,
`mths_since_last_record` STRING,
`open_acc` INT,
`pub_rec` INT,
`revol_bal` INT,
`revol_util` STRING,
`total_acc` INT,
`initial_list_status` STRING,
`out_prncp` DOUBLE,
`out_prncp_inv` DOUBLE,
`total_pymnt` DOUBLE,
`total_pymnt_inv` DOUBLE,
`total_rec_prncp` DOUBLE,
`total_rec_int` DOUBLE,
`total_rec_late_fee` DOUBLE,
`recoveries` DOUBLE,
`collection_recovery_fee` DOUBLE,
`last_pymnt_d` STRING,
`last_pymnt_amnt` DOUBLE,
`next_pymnt_d` STRING,
`last_credit_pull_d` STRING,
`collections_12_mths_ex_med` INT,
`mths_since_last_major_derog` STRING,
`policy_code` STRING,
`application_type` STRING,
`annual_inc_joint` STRING,
`dti_joint` STRING,
`verification_status_joint` STRING,
`acc_now_delinq` STRING,
`tot_coll_amt` STRING,
`tot_cur_bal` STRING,
`open_acc_6m` STRING,
`open_il_6m` STRING,
`open_il_12m` STRING,
`open_il_24m` STRING,
`mths_since_rcnt_il` STRING,
`total_bal_il` STRING,
`il_util` STRING,
`open_rv_12m ` STRING,
`open_rv_24m` STRING,
`max_bal_bc` STRING,
`all_util` STRING,
`total_credit_rv` STRING,
`inq_fi` STRING,
`total_fi_tl` STRING,
`inq_last_12m` STRING
)
ROW FORMAT delimited
fields terminated by ','
STORED AS TEXTFILE;
Loading data into table A
load data local inpath '/home/cloudera/Desktop/Project-3/1/LoanStats3a.txt' into table A;
Select data
hive> SELECT * FROM A LIMIT 1;
Output
"1077501" "1296599" NULL NULL NULL " 36 months" "
10.65%" NULL "B" "B2" "" "10+ years" "RENT" NULL "Verified" "Dec-2011" "Fully
Paid" "n" "https://www.lendingclub.com/browse/loanDetail.action?loan_id=1077501" "
Borrower added on 12/22/11 > I need to upgrade my business
technologies." "credit_card" "Computer" "860xx" "AZ" NULL NULL "Jan-1985" "1" "" "" NULL NULL NULL "83.7%"NULL "f" NULL NULL NULL NULL NULL NULL NULL NULL NULL "Jan-2015" NULL "" "Dec-2015" NULL "" "1" "INDIVIDUAL"
"" "" "" "0" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ""
I found the solution:
create table stat2(id String, member_id INT, loan_amnt FLOAT, funded_amnt FLOAT, funded_amnt_inv FLOAT, term String, int_rate String, installment FLOAT, grade String, sub_grade String, emp_title String, emp_length String, home_ownership String, annual_inc FLOAT, verification_status String, issue_d date, loan_status String, pymnt_plan String, url String, descp String, purpose String, title String, zip_code String, addr_state String, dti FLOAT, delinq_2yrs FLOAT, earliest_cr_line String, inq_last_6mths FLOAT, mths_since_last_delinq FLOAT, mths_since_last_record FLOAT, open_acc FLOAT, pub_rec FLOAT, revol_bal FLOAT, revol_util String, total_acc FLOAT, initial_list_status String, out_prncp FLOAT, out_prncp_inv FLOAT, total_pymnt FLOAT, total_pymnt_inv FLOAT, total_rec_prncp FLOAT, total_rec_int FLOAT, total_rec_late_fee FLOAT, recoveries FLOAT, collection_recovery_fee FLOAT,
last_pymnt_d String, last_pymnt_amnt FLOAT, next_pymnt_d String, last_credit_pull_d String, collections_12_mths_ex_med FLOAT, mths_since_last_major_derog FLOAT, policy_code FLOAT, application_type String, annual_inc_joint FLOAT, dti_joint FLOAT, verification_status_joint String, acc_now_delinq FLOAT, tot_coll_amt FLOAT, tot_cur_bal FLOAT, open_acc_6m FLOAT, open_il_6m FLOAT, open_il_12m FLOAT, open_il_24m FLOAT, mths_since_rcnt_il FLOAT, total_bal_il FLOAT, il_util FLOAT, open_rv_12m FLOAT, open_rv_24m FLOAT, max_bal_bc FLOAT, all_util FLOAT, total_rev_hi_lim FLOAT, inq_fi FLOAT, total_cu_tl FLOAT, inq_last_12m FLOAT)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde' with serdeproperties (
"separatorChar" = ",",
"quoteChar" = "\""
)
STORED AS TEXTFILE tblproperties ("skip.header.line.count"="2",
"skip.footer.line.count"="4");
It seems that your CSV contains quotes around the individual fields. The surrounding quotes are not stripped by Hive's default row format, so they become part of the fields. In the case of string fields, the quotes become part of the string; in the case of numeric fields, the quotes make the field an invalid number, resulting in NULLs.
See csv-serde for a SerDe that supports quotes in CSV files.

Deletion of folder on Amazon S3 while creating an external table

We are seeing very unusual behavior on our S3 bucket, and it is not consistent, so we have not been able to pinpoint the problem. The issue: I run one query (creation of an external table), which leads to deletion of the folder that the external table points to. This has happened 3-4 times to us. Could you please explain this behaviour? For your convenience I am attaching the external table query and the log of the operations performed on the S3 bucket.
Query:
create table apr_2(date_local string, time_local string,s_computername string,c_ip string,s_ip string,s_port string,s_sitename string, referer string, localfile string, TimeTakenMS string, status string, w3status string, sc_substatus string, uri string, qs string, sc_bytes string, cs_bytes string, cs_username string, cs_User_Agent string, s_proxy string, c_protocol string, cs_version string, cs_method string, cs_Cookie string, cs_Host string, w3wpbytes string, RequestsPerSecond string, CPU_Utilization string, BeginRequest_UTC string, EndRequest_UTC string, time string, logdate string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' location 's3://logs/apr_2_com'
logs:
REST.DELETE.OBJECT logs/apr_2_com/000002.tar.gz
REST.DELETE.OBJECT logs/apr_2_com/000001.tar.gz
Try using this syntax instead. Declaring the table as external tells Hive that it does not own the data at the given location, so Hive will not delete the underlying files (with a managed table, Hive manages the data under the table's location and can remove it, for example when the table is dropped).
create external table if not exists apr_2(date_local string, time_local string,s_computername string,c_ip string,s_ip string,s_port string,s_sitename string, referer string, localfile string, TimeTakenMS string, status string, w3status string, sc_substatus string, uri string, qs string, sc_bytes string, cs_bytes string, cs_username string, cs_User_Agent string, s_proxy string, c_protocol string, cs_version string, cs_method string, cs_Cookie string, cs_Host string, w3wpbytes string, RequestsPerSecond string, CPU_Utilization string, BeginRequest_UTC string, EndRequest_UTC string, time string, logdate string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' location 's3://logs/apr_2_com'
