I am completely new to JS, Ajax, etc. and Scala.js. I have a small REST API where I can send GET requests (e.g. localhost:9000/stats?first=abc&second=cde&from=1482327698&to=1482327998).
How can I do this in Scala.js? I tried:
val successButton = Button(ButtonStyle.success)("Plot", onclick := Bootstrap.jsClick(_ => {
  val subj1 = firstText.now
  val subj2 = secondText.now
  val now = System.currentTimeMillis / 1000
  val from = now - 300
  val url = s"http://127.0.0.1:9000/stats?first=$subj1&second=$subj2&from=$from&to=$now"
  jQuery.get(url)
})).render
But it gives me literally nothing. Thanks in advance
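For what it's worth, here is a minimal sketch of actually consuming the response, using the scala-js-dom Ajax helper rather than the jQuery facade (an assumption on my part; the snippet above fires the request but never does anything with the asynchronous result):

// Hedged sketch: Ajax.get returns a Future, so the response must be handled in a callback.
import org.scalajs.dom.ext.Ajax
import scala.concurrent.ExecutionContext.Implicits.global

val now  = System.currentTimeMillis / 1000
val from = now - 300
val url  = s"http://127.0.0.1:9000/stats?first=abc&second=cde&from=$from&to=$now"

Ajax.get(url).foreach { xhr =>
  println(xhr.responseText) // the raw body returned by the REST endpoint
}

Also note that if the page is not served from 127.0.0.1:9000 itself, the browser may silently block the response unless the API sends CORS headers, which would likewise explain seeing nothing at all.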
I have a list of URLs and I am trying to scrape each of them to produce a single data frame. My code is below:
res=[]
for link in url_list:
    html = urlopen(link)
    soup = BeautifulSoup(html, 'html.parser')
    headers = [th.getText() for th in soup.findAll('tr', limit=2)[1].findAll('th')]
    headers = headers[1:]
    rows = soup.findAll('tr')[2:]
    player_stats = [[td.getText() for td in rows[i].findAll('td')]
                    for i in range(len(rows))]
    stats = pd.DataFrame(player_stats, columns=headers)
I only get output for one web page. Why is that?
You are overwriting your dataframe on each pass through the for loop, so only the last page's data survives. If you have a lot of pages, collect the individual dataframes in a list and concatenate them at the end.
# assuming Python 3 style imports
from urllib.request import urlopen
from bs4 import BeautifulSoup
import pandas as pd

df_hold_list = []
for link in url_list:
    html = urlopen(link)
    soup = BeautifulSoup(html, 'html.parser')
    headers = [th.getText() for th in soup.findAll('tr', limit=2)[1].findAll('th')]
    headers = headers[1:]
    rows = soup.findAll('tr')[2:]
    player_stats = [[td.getText() for td in rows[i].findAll('td')]
                    for i in range(len(rows))]
    df_hold_list.append(pd.DataFrame(player_stats, columns=headers))

# concatenate one dataframe per page into a single result
stats = pd.concat(df_hold_list)
I have made an interactive choropleth map with Bokeh, and I'm trying to add active interactions using the dropdown widget (Select). However, most tutorials and SO questions about active interactions use ColumnDataSource, not GeoJSONDataSource.
The issue is that GeoJSONDataSource doesn't have a .data attribute the way ColumnDataSource does, so I don't know exactly what the syntax for updating it looks like.
My dataset is a dictionary of the form city_dict = {'Amsterdam': <some data frame>, 'Antwerp': <some data frame>, ...}, where each dataframe is in GeoJSON format. I have already confirmed that this format works when making glyphs.
def update(attr, old, new):
    s_value = dropdown.value
    p.title.text = '%s', s_value
    new_src1 = make_dataset(s_value)
    val1 = GeoJSONDataSource(new_src1)
    r1.data_source = val1
where make_dataset is a function that transforms my original dataset into one that can be fed to GeoJSONDataSource. make_dataset takes a string (the name of the city), e.g. 'Amsterdam', and it works for passive interactions.
The main plot code (unnecessary parts removed) is:
dropdown = Select(value='Amsterdam', options = cities)
controls = WidgetBox(dropdown)
initial_city = 'Amsterdam'
a = make_dataset(initial_city)
src1 = GeoJSONDataSource(a)
p = figure(title = 'Amsterdam', plot_height = 750 , plot_width = 900, toolbar_location = 'right')
r1 = p.patches('xs','ys', source = src1, fill_color = {'field' :'norm', 'transform' : color_mapper})
dropdown.on_change('value', update)
layout = row(controls, p)
curdoc().add_root(layout)
I've added the error I get:
error handling message Message 'PATCH-DOC' (revision 1) content: {'events': [{'kind': 'ModelChanged', 'model': {'type': 'Select', 'id': '1147'}, 'attr': 'value', 'new': 'Antwerp'}], 'references': []}: ValueError("expected a value of type str, got ('%s', 'Antwerp') of type tuple",)
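The ValueError in that traceback is about the title assignment rather than the data source: in Python, '%s', s_value builds a tuple, not a formatted string. A minimal illustration (plain Python, not Bokeh-specific):

s_value = 'Antwerp'
title = '%s', s_value    # a tuple: ('%s', 'Antwerp') -- exactly what the error reports
title = '%s' % s_value   # a formatted string: 'Antwerp'
title = s_value          # or simply assign the string directly

With the title fixed, replacing the whole source (r1.data_source = val1) versus updating the existing source in place is a separate question; if I recall the Bokeh API correctly, GeoJSONDataSource carries its payload in a geojson string property, so updating that property in place is the usual alternative to swapping the source object.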
In Vue.js I use this code in a computed property to calculate a cumulative sum of amounts. It works fine on localhost, but when I upload the project to a web server, it stops working.
Code:
personsWithAmount(){
  const reducer = (accumulator, currentValue) => accumulator + currentValue.amount;
  return this.persons.map((pers) => {
    let p = pers;
    if (pers.usercashfloat.length === 0) {
      p.totalAmount = 0;
    } else if (pers.usercashfloat.length === 1) {
      p.totalAmount = pers.usercashfloat[0].amount;
    } else {
      window.cashfloat = pers.usercashfloat;
      p.totalAmount = pers.usercashfloat.reduce(reducer, 0);
    }
    return p;
  });
}
Result on localhost:
Array 1 = 200
Array 2 = 200
Result = 400
Result on the server:
Array 1 = 200
Array 2 = 200
Result = 200200
Thanks
It seems like string concatenation is happening; the deployed API probably returns amount as a string, so + joins the values instead of adding them. Perhaps this works:
const reducer = (accumulator, currentValue) => Number(accumulator) + Number(currentValue.amount);
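The "200200" result is the giveaway. A tiny illustration of the symptom, independent of Vue:

console.log(200 + 200);                      // 400  (numbers are added)
console.log('200' + '200');                  // "200200"  (strings are concatenated)
console.log(Number('200') + Number('200'));  // 400  (convert first, as in the reducer above)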
I am trying to download public comments and replies from the public posts of a Facebook page.
My code was working until 5 Feb 2018; now it shows the error below for the replies:
Error in data.frame(from_id = json$from$id, from_name = json$from$name, :
  arguments imply differing number of rows: 0, 1
Called from: data.frame(from_id = json$from$id, from_name = json$from$name,
  message = ifelse(!is.null(json$message), json$message, NA),
  created_time = json$created_time, likes_count = json$like_count,
  comments_count = json$comment_count, id = json$id, stringsAsFactors = F)
Please refer to the code I am using below.
data_fun = function(II, JJ, page, my_oauth){
  test <- list()
  test.reply <- list()
  for (i in II:length(page$id)){
    test[[i]] <- getPost(post = page$id[i], token = my_oauth, n = 100000, comments = TRUE, likes = FALSE)
    if (nrow(test[[i]][["comments"]]) > 0) {
      write.csv(test[[i]], file = paste0(page$from_name[2], "_comments_", i, ".csv"), row.names = F)
      for (j in JJ:length(test[[i]]$comments$id)){
        test.reply[[j]] <- getCommentReplies(comment_id = test[[i]]$comments$id[j], token = my_oauth, n = 100000, replies = TRUE, likes = FALSE)
        if (nrow(test.reply[[j]][["replies"]]) > 0) {
          write.csv(test.reply[[j]], file = paste0(page$from_name[2], "_replies_", i, "_and_", j, ".csv"), row.names = F)
        }
      }
    }
  }
  Sys.sleep(10)
}
Thanks for your support in advance.
I had the very same problem, as Facebook changed the API rules at the end of January. If you update your package with devtools from Pablo Barbera's GitHub, it should work for you.
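For reference, that update would look roughly like this; the repository path is my assumption of where the development version of Rfacebook lives:

# install the development version from GitHub (assumed repo path), then reload it
install.packages("devtools")
devtools::install_github("pablobarbera/Rfacebook/Rfacebook")
library(Rfacebook)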
I have amended my code (a little) and it works fine now for replies to comments. There is one frustrating thing, though: Facebook doesn't appear to allow you to extract the user name any more. I have a pool of data already, so I am now using that to train and predict gender.
If you have any questions and want to make contact, drop me an email at 'robert.chestnutt2#mail.dcu.ie'.
By the way, it may not be an issue for you, but I have had challenges in the past writing the Rfacebook output to a CSV. Saving the output as an .RData file preserves its structure a lot better.
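The .RData suggestion in code, assuming the test and test.reply lists from the loop in the question are what you want to keep:

# save the raw R objects instead of flattening them to CSV; load() restores them later
save(test, test.reply, file = "facebook_output.RData")
# load("facebook_output.RData")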
I am using Spark Streaming 1.1.0 locally (not on a cluster).
I created a simple app that parses the data (about 10,000 entries), stores it in a stream, and then makes some transformations on it. Here is the code:
def main(args: Array[String]) {
  val master = "local[8]"
  val conf = new SparkConf().setAppName("Tester").setMaster(master)
  val sc = new StreamingContext(conf, Milliseconds(110000))

  val stream = sc.receiverStream(new MyReceiver("localhost", 9999))
  val parsedStream = parse(stream)

  parsedStream.foreachRDD(rdd =>
    println(rdd.first() + "\nRULE STARTS " + System.currentTimeMillis()))

  val result1 = parsedStream
    .filter(entry => entry.symbol.contains("walking")
      && entry.symbol.contains("true") && entry.symbol.contains("id0"))
    .map(_.time)

  val result2 = parsedStream
    .filter(entry =>
      entry.symbol == "disappear" && entry.symbol.contains("id0"))
    .map(_.time)

  val result3 = result1
    .transformWith(result2, (rdd1, rdd2: RDD[Int]) => rdd1.subtract(rdd2))

  result3.foreachRDD(rdd =>
    println(rdd.first() + "\nRULE ENDS " + System.currentTimeMillis()))

  sc.start()
  sc.awaitTermination()
}

def parse(stream: DStream[String]) = {
  stream.flatMap { line =>
    val entries = line.split("assert").filter(entry => !entry.isEmpty)
    entries.map { tuple =>
      val pattern = """\s*[(](.+)[,]\s*([0-9]+)+\s*[)]\s*[)]\s*[,|\.]\s*""".r
      tuple match {
        case pattern(symbol, time) =>
          new Data(symbol, time.toInt)
      }
    }
  }
}
case class Data (symbol: String, time: Int)
I have a batch duration of 110,000 milliseconds in order to receive all the data in one batch. I believed that, even locally, Spark would be very fast. In this case, it takes about 3.5 s to execute the rule (between "RULE STARTS" and "RULE ENDS"). Am I doing something wrong, or is this the expected time? Any advice?
So I was using case matching in a lot of my jobs and it killed performance, more so than when I introduced a JSON parser. Also try tweaking the batch time on the StreamingContext; it made quite a bit of difference for me. Also, how many local workers do you have?
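For instance, a hedged sketch of what "less case matching" could look like in the parse function from the question: hoist the regex out of the per-record closure (it is currently recompiled for every entry) and replace the partial match with findFirstMatchIn. The Parser wrapper object and the choice to silently drop non-matching entries are my assumptions, not something from the thread.

import org.apache.spark.streaming.dstream.DStream

object Parser {
  // compile the regex once, not once per record
  private val pattern = """\s*[(](.+)[,]\s*([0-9]+)+\s*[)]\s*[)]\s*[,|\.]\s*""".r

  def parse(stream: DStream[String]): DStream[Data] =
    stream.flatMap { line =>
      line.split("assert")
        .filter(_.nonEmpty)
        .flatMap { entry =>
          // Option-based match: unmatched entries are dropped (assumption)
          // instead of throwing a MatchError as in the original partial match
          pattern.findFirstMatchIn(entry).map(m => Data(m.group(1), m.group(2).toInt))
        }
    }
}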