Calabash-android: getting an error when trying to press on an object with coordinates using 'perform_action' - ruby

I'm trying to use the rect output from a query with the perform_action command.
For example:
query("* text:'Hello'", :y)
[
[0] 226.0
]
Trying:
perform_action('long_press_coordinate', 200, y)
And getting the error:
RuntimeError: Action 'long_press_coordinate' unsuccessful: Can not deserialize instance of java.lang.String[] out of END_OBJECT token
at [Source: java.io.StringReader@412a8480; line: 1, column: 61] (through reference chain: sh.calaba.instrumentationbackend.Command["arguments"])
Is this a syntax issue, or is it something more?
How do I turn the y value into a regular number?

I found code that works. query returns an array with one entry per matching view, so take the first element:
y = query("* text:'Hello'", :y)
perform_action('long_press_coordinate', 200, y[0])
Hope this helps.

Related

Unexpected keyword argument 'unk_token'

When trying to load this tokenizer I am getting this error, but I don't know why it can't take the unk_token. Any ideas?
tokenizer = tokenizers.SentencePieceUnigramTokenizer(unk_token="", eos_token="", pad_token="")
----> 1 tokenizer = tokenizers.SentencePieceUnigramTokenizer(unk_token="", eos_token="", pad_token="")
TypeError: __init__() got an unexpected keyword argument 'unk_token'
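One way to check which keyword arguments the constructor actually accepts in your installed version is to inspect its signature; in some versions of the tokenizers library, SentencePieceUnigramTokenizer takes no special-token keywords at all. A minimal diagnostic sketch:
import inspect

import tokenizers

# Print the constructor signature for the installed version; if
# unk_token is absent here, the TypeError above is expected.
print(inspect.signature(tokenizers.SentencePieceUnigramTokenizer.__init__))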

Changing CRS in GeoPandas

I'm trying to change the CRS of a geopandas dataframe. The current CRS is:
Name: unknown
Axis Info [ellipsoidal]:
- lon[east]: Longitude (degree)
- lat[north]: Latitude (degree)
Area of Use:
- undefined
Datum: World Geodetic System 1984
- Ellipsoid: WGS 84
- Prime Meridian: Greenwich
When I try dfTrans.to_crs('epsg:4326') I get the following error:
pyproj.exceptions.CRSError: Invalid projection: epsg:4326: (Internal Proj Error: proj_create: cannot build geodeticCRS 4326: SQLite error on SELECT name, ellipsoid_auth_name, ellipsoid_code, prime_meridian_auth_name, prime_meridian_code, area_of_use_auth_name, area_of_use_code, publication_date, deprecated FROM geodetic_datum WHERE auth_name = ? AND code = ?: no such column: publication_date)
For a simple command in pyproj, pyproj.CRS.from_epsg(4326), I get the same error:
File "pyproj/_crs.pyx", line 1738, in pyproj._crs._CRS.__init__
pyproj.exceptions.CRSError: Invalid projection: epsg:4326: (Internal Proj Error: proj_create: cannot build geodeticCRS 4326: SQLite error on SELECT name, ellipsoid_auth_name, ellipsoid_code, prime_meridian_auth_name, prime_meridian_code, area_of_use_auth_name, area_of_use_code, publication_date, deprecated FROM geodetic_datum WHERE auth_name = ? AND code = ?: no such column: publication_date)
I don't know enough to know what's going on, but it seems like there's an underlying function that calls a column that doesn't exist. Any ideas how to fix this or work around it?
I got that same error when using PROJ 5.x. The 'publication_date' column appears to be a PROJ 6 / PROJ 7 item (both of which require SQLite), so the error points to a mismatch between the PROJ library and the proj.db database it is reading; upgrading so that library and database match should fix it.
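To see which PROJ build and which proj.db your installation is actually using, here is a small diagnostic sketch (these are standard pyproj attributes, but treat the exact output as version-dependent):
import pyproj
from pyproj.datadir import get_data_dir

# A proj.db from a different PROJ version in this directory can
# trigger the "no such column" error above.
print(pyproj.__version__)       # pyproj package version
print(pyproj.proj_version_str)  # underlying PROJ version; expect 6+
print(get_data_dir())           # directory containing proj.db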

Word count NoneType error in PySpark

I am trying to do some text analysis:
import re
import string

from pyspark.sql.functions import udf, explode, split
from pyspark.sql.types import StringType

def cleaning_text(sentence):
    sentence = sentence.lower()
    sentence = re.sub('\'', '', sentence.strip())
    sentence = re.sub('^\d+\/\d+|\s\d+\/\d+|\d+\-\d+\-\d+|\d+\-\w+\-\d+\s\d+\:\d+|\d+\-\w+\-\d+|\d+\/\d+\/\d+\s\d+\:\d+', ' ', sentence.strip())  # dates removed
    sentence = re.sub(r'(.)(\/)(.)', r'\1\3', sentence.strip())
    sentence = re.sub("(.*?\//)|(.*?\\\\)|(.*?\\\)|(.*?\/)", ' ', sentence.strip())
    sentence = re.sub('^\d+', '', sentence.strip())
    sentence = re.sub('[%s]' % re.escape(string.punctuation), '', sentence.strip())
    cleaned = ' '.join([w for w in sentence.split()
                        if not len(w) < 2 and w not in ('no', 'sc', 'ln')])
    cleaned = cleaned.strip()
    if len(cleaned) <= 1:
        return "NA"
    else:
        return cleaned

org_val = udf(cleaning_text, StringType())
df_new = df.withColumn("cleaned_short_desc", org_val(df["symptom_short_description_"]))
df_new = df_new.withColumn("cleaned_long_desc", org_val(df_new["long_description"]))
longWordsDF = df_new.select(explode(split('cleaned_long_desc', ' ')).alias('word'))
longWordsDF.count()
I get the following error.
File "<stdin>", line 2, in cleaning_text
AttributeError: 'NoneType' object has no attribute 'lower'
I want to perform word counts, but any kind of aggregation function gives me an error.
I tried the following things:
sentence = sentence.encode("ascii", "ignore")
(added inside the cleaning_text function)
df.dropna()
It still gives the same issue, and I do not know how to resolve it.
It looks like you have null values in some columns. Add a None check at the beginning of the cleaning_text function and the error will disappear:
def cleaning_text(sentence):
    if sentence is None:
        return "NA"
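A related detail: df.dropna() returns a new DataFrame rather than modifying df in place, which is likely why calling it on its own did not help. A short sketch of handling the nulls at the DataFrame level instead (column names taken from the question):
# dropna()/na.fill() return new DataFrames; assign the result.
# Option 1: drop rows where either description column is null.
df = df.dropna(subset=['symptom_short_description_', 'long_description'])
# Option 2: replace nulls with empty strings before applying the UDF.
df = df.na.fill('', subset=['symptom_short_description_', 'long_description'])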

How to read JSON data in Pig?

I have the following type of json file:
{"employees":[
{"firstName":"John", "lastName":"Doe"},
{"firstName":"Anna", "lastName":"Smith"},
{"firstName":"Peter", "lastName":"Jones"}
]}
I am trying to execute the following pig script to load json data
A = load 'pigdemo/employeejson.json' using JsonLoader ('employees:{(firstName:chararray)},{(lastName:chararray)}');
I am getting this error:
Unable to recreate exception from backed error: Error:
org.codehaus.jackson.JsonParseException: Unexpected end-of-input: expected close marker for ARRAY (from [Source: java.io.ByteArrayInputStream@1553f9b2; line: 1, column: 1]) at [Source: java.io.ByteArrayInputStream@1553f9b2; line: 1, column: 29]
First, the reason you see Unexpected end-of-input is that each record should be on a single line, like this:
{"employees":[{"firstName":"John", "lastName":"Doe"}, {"firstName":"Anna", "lastName":"Smith"}, {"firstName":"Peter", "lastName":"Jones"}]}
Now, since each row is an employees list, run the following:
A = load '$flurryData' using JsonLoader ('employees:bag {t:tuple(firstName:chararray, lastName:chararray)}');
describe A;
dump A;
This gives the following output:
A: {employees: {t: (firstName: chararray,lastName: chararray)}}
({(John,Doe),(Anna,Smith),(Peter,Jones)})
Hope this helps!

'SparkContext' object has no attribute 'textfile'

I tried loading a file by using following code:
textdata = sc.textfile('hdfs://localhost:9000/file.txt')
Error message:
AttributeError: 'SparkContext' object has no attribute 'textfile'
It is sc.textFile(...) with a capital F.
You can inspect the SparkContext API documentation for the full list of methods.
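A minimal sketch of the corrected call (the HDFS path comes from the question; the local-mode SparkContext setup is an assumption, and is unnecessary inside the PySpark shell, where sc already exists):
from pyspark import SparkContext

# Assumed setup for a standalone script.
sc = SparkContext("local[*]", "textfile-demo")

# Note the capital F: textFile, not textfile.
textdata = sc.textFile('hdfs://localhost:9000/file.txt')
print(textdata.count())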
