I am trying to apply a discount value in QuickBooks, but for some reason the discount does not apply. Can you please guide me on how to resolve this issue?
https://github.com/ruckus/quickbooks-ruby/issues/267
Discount value code:
discount_amount = discount_value
discount_line_item = Quickbooks::Model::InvoiceLineItem.new
discount_line_item.amount = discount_amount # 149
discount_line_item.discount_item! do |detail|
  detail.discount_account_id = 48
end
Discount percentage code:
discount_line_item = Quickbooks::Model::InvoiceLineItem.new
discount_line_item.amount = discount_percentage_value # 149
discount_line_item.discount_item! do |detail|
  detail.discount_percent = discount_percentage_value # 60
  detail.percent_based = percent_based
  detail.discount_account_id = 48
end
invoice.line_items << discount_line_item
I did a Google search and found this link that may be useful in solving your issue. Otherwise, there's no way we can help you without more detail: is the application timing out, is the data not being posted to QuickBooks, or is there an error in your log file?
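For what it's worth, two things stand out in the snippets above: the fixed-amount version never appends the line to invoice.line_items (only the percentage version does), and percent_based is assigned from a variable whose value isn't shown, though it presumably must be true for a percentage discount. A minimal sketch of the percentage case under those assumptions, reusing only the question's own calls:
discount_line_item = Quickbooks::Model::InvoiceLineItem.new
discount_line_item.amount = discount_percentage_value
discount_line_item.discount_item! do |detail|
  detail.discount_percent = discount_percentage_value
  detail.percent_based    = true # assumption: must be true for a percent discount
  detail.discount_account_id = 48
end
invoice.line_items << discount_line_item # needed in the fixed-amount case as well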
When I run the following code
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
text = r"""
As checked Dis is not yet on boarded to ARB portal, hence we cannot upload the invoices in portal
"""
questions = [
    "Dis asked if it is possible to post the two invoice in ARB.I have not access so I wanted to check if you would be able to do it.",
]
for question in questions:
    inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    answer_start_scores, answer_end_scores = model(**inputs)
    answer_start = torch.argmax(answer_start_scores)    # most likely beginning of the answer
    answer_end = torch.argmax(answer_end_scores) + 1    # most likely end of the answer
    answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}\n")
The answer that I get here is:
Question: Dis asked if it is possible to post the two invoice in ARB.I have not access so I wanted to check if you would be able to do it.
Answer: dis is not yet on boarded to ARB portal
How do I get a score for this answer? By score I mean something similar to what I get when I run the question-answering pipeline.
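For reference, one way to attach a probability to the argmax span from the snippet above is to softmax the start and end logits and multiply the two probabilities. This is only a sketch: the real pipeline also masks out question and padding tokens before normalizing, as explored below.
import torch.nn.functional as F

start_probs = F.softmax(answer_start_scores, dim=-1)  # (1, seq_len)
end_probs = F.softmax(answer_end_scores, dim=-1)      # (1, seq_len)
# probability of the chosen span = P(start) * P(end)
score = (start_probs[0, answer_start] * end_probs[0, answer_end - 1]).item()
print(f"Score: {score:.4f}")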
I have to take this approach since the question-answering pipeline gives me a KeyError for the code below:
from transformers import pipeline
nlp = pipeline("question-answering")
context = r"""
As checked Dis is not yet on boarded to ARB portal, hence we cannot upload the invoices in portal.
"""
print(nlp(question="Dis asked if it is possible to post the two invoice in ARB?", context=context))
This is my attempt to get the score. I cannot figure out what feature.p_mask is, so at the moment I cannot remove the non-context indexes that should not contribute to the softmax.
# assuming question and context are defined as above
import numpy as np
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
inputs = tokenizer(question, context,
                   add_special_tokens=True,
                   return_tensors='pt')
input_ids = inputs['input_ids'].tolist()[0]
outputs = model(**inputs)
# logits used to compute the score
start = outputs.start_logits.detach().numpy()
end = outputs.end_logits.detach().numpy()
# from the pipeline source:
# "Ensure padded tokens & question tokens cannot belong to the set of candidate answers."
# undesired_tokens = np.abs(np.array(feature.p_mask) - 1) & feature.attention_mask
# I could not reproduce feature.p_mask, so the attention mask is used as a stand-in:
undesired_tokens = inputs['attention_mask'].numpy()
undesired_tokens_mask = undesired_tokens == 0.0
# make sure non-context indexes cannot contribute to the softmax
start_ = np.where(undesired_tokens_mask, -10000.0, start)
end_ = np.where(undesired_tokens_mask, -10000.0, end)
# normalize the logits with a log-sum-exp softmax to get probabilities
start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))
# score of each (start, end) pair = P(start) * P(end)
outer = np.matmul(np.expand_dims(start_, -1), np.expand_dims(end_, 1))
# remove candidates with end < start or end - start > max_answer_len
max_answer_len = 15
candidates = np.tril(np.triu(outer), max_answer_len - 1)
scores_flat = candidates.flatten()
idx_sort = [np.argmax(scores_flat)]
start, end = np.unravel_index(idx_sort, candidates.shape)[1:]
end += 1
score = candidates[0, start, end - 1]
start, end, score = start.item(), end.item(), score.item()
print(tokenizer.decode(input_ids[start:end]))
print(score)
See the pipeline source code for more.
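To answer the p_mask question above: in the pipeline, p_mask marks every token that may not be part of the answer (the question tokens and the special tokens). With a fast tokenizer it can be rebuilt from sequence_ids(); a sketch, assuming inputs came from the tokenizer call above and the tokenizer is a fast one (sequence_ids() is only available on fast tokenizers):
import numpy as np

seq_ids = inputs.sequence_ids(0)  # None = special token, 0 = question, 1 = context
p_mask = np.array([0 if s == 1 else 1 for s in seq_ids])
# mirrors the source line quoted above
undesired_tokens = np.abs(p_mask - 1) & inputs['attention_mask'].numpy()[0]
undesired_tokens_mask = undesired_tokens == 0.0
With this mask, the question tokens are excluded from the softmax as well, not just the padding.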
I am reviewing Hugging Face's version of ALBERT.
However, I cannot find any code or comments about SOP (Sentence Order Prediction).
I can find the NSP (Next Sentence Prediction) implementation in src/transformers/modeling_bert.py:
if masked_lm_labels is not None and next_sentence_label is not None:
    loss_fct = CrossEntropyLoss()
    masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), masked_lm_labels.view(-1))
    next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
    total_loss = masked_lm_loss + next_sentence_loss
    outputs = (total_loss,) + outputs
Is SOP inherited from here with SOP-style labeling, or is there something I am missing?
The sentence order loss is here:
sentence_order_loss = loss_fn(y_true=sentence_order_label, y_pred=sentence_order_reduced_logits)
It's just a cross-entropy loss over the two order classes (in-order vs. swapped).
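Structurally it mirrors the NSP branch quoted above, just with sentence-order labels instead of next-sentence labels. A minimal PyTorch sketch (the names here are illustrative, not the library's):
import torch
import torch.nn as nn

class SOPHead(nn.Module):
    # binary classifier over the pooled [CLS] representation:
    # class 0 = segments in original order, class 1 = segments swapped
    def __init__(self, hidden_size):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, 2)

    def forward(self, pooled_output, sentence_order_label=None):
        logits = self.classifier(pooled_output)  # (batch, 2)
        loss = None
        if sentence_order_label is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, 2), sentence_order_label.view(-1))
        return logits, loss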
I am trying to download public comments and replies from public Facebook posts by page.
My code worked until 5 Feb 2018; now it shows the error below for the replies:
Error in data.frame(from_id = json$from$id, from_name = json$from$name, :
arguments imply differing number of rows: 0, 1
Called from: data.frame(from_id = json$from$id, from_name = json$from$name,
message = ifelse(!is.null(json$message), json$message, NA),
created_time = json$created_time, likes_count = json$like_count,
comments_count = json$comment_count, id = json$id, stringsAsFactors = F)
Please refer to the code I am using below:
data_fun = function(II, JJ, page, my_oauth) {
  test <- list()
  test.reply <- list()
  for (i in II:length(page$id)) {
    test[[i]] <- getPost(post = page$id[i], token = my_oauth, n = 100000, comments = TRUE, likes = FALSE)
    if (nrow(test[[i]][["comments"]]) > 0) {
      write.csv(test[[i]], file = paste0(page$from_name[2], "_comments_", i, ".csv"), row.names = F)
      for (j in JJ:length(test[[i]]$comments$id)) {
        test.reply[[j]] <- getCommentReplies(comment_id = test[[i]]$comments$id[j], token = my_oauth, n = 100000, replies = TRUE, likes = FALSE)
        if (nrow(test.reply[[j]][["replies"]]) > 0) {
          write.csv(test.reply[[j]], file = paste0(page$from_name[2], "_replies_", i, "_and_", j, ".csv"), row.names = F)
        }
      }
    }
  }
  Sys.sleep(10)
}
Thanks for your support in advance.
I had the very same problem, as Facebook changed the API rules at the end of January: the comment JSON no longer includes the commenter's details, so json$from$id comes back empty and the data.frame() call fails with columns of differing lengths. If you update your package with devtools from Pablo Barbera's GitHub, it should work for you.
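A sketch of that update, assuming the package lives in the Rfacebook/ subdirectory of the repository:
# install the development version of Rfacebook from GitHub
library(devtools)
install_github("pablobarbera/Rfacebook/Rfacebook")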
I have amended my code a little and it now works fine for replies to comments. One frustrating thing, though, is that Facebook doesn't appear to allow one to extract the user name any more. I have a pool of data already, so I am now using that to train and predict gender.
If you have any questions and want to make contact, drop me an email at robert.chestnutt2@mail.dcu.ie.
By the way, it may not be an issue for you, but I have had challenges in the past writing the Rfacebook output to a CSV. Saving the output as an .RData file maintains the form a lot better.
I am trying to make an app for vacation requests, but I am facing a problem. I have a model called VacationRequest and a VacationRequest view where the result will be shown.
VacationRequest.rb model
def skip_holidays
  count1 = 0
  special_days = Date.parse("2017-05-09", "2017-05-12")
  special_days.each do |sd|
    if ((start_data..end_data) === sd)
      numero = (end_data - start_data).to_i
      numro1 = numero - sd
    else
      numero = (end_data - start_data).to_i
    end
  end
end
VacationRequest show.html.erb
Here I call the method from the model:
@vacation_request.skip_holidays
This raises errors and does not work. Please help me with this!
My approach to this would be the following:
def skip_holidays
  special_days = ["2017-05-09", "2017-05-12"].map(&:to_date)
  accepted_days = []
  refused_days = []
  (start_data..end_data).each do |requested_date|
    if special_days.include?(requested_date)
      refused_days << requested_date
    else
      accepted_days << requested_date
    end
  end
  # return both counts so the caller can display them
  { accepted: accepted_days.count, refused: refused_days.count }
end
This way you iterate over all requested dates (the range) and check whether each date is a special day: if so, it is refused; otherwise it is accepted.
At the end you can display statistics about accepted and refused dates.
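The same split can also be written more compactly with Enumerable#partition; a sketch using the same names as above:
accepted_days, refused_days = (start_data..end_data).partition do |date|
  !special_days.include?(date)
end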
You cannot create a date range by passing multiple arguments to Date.parse. Instead, call it once per date and build a range from the results:
special_days = Date.parse("2017-05-09") .. Date.parse("2017-05-12")
Or if instead you want only the two dates, do:
special_days = ["2017-05-09", "2017-05-12"].map &:to_date
This Date.parse("2017-05-09", "2017-05-12") don`t create a range, only return the first params parsed:
#irb
Date.parse("2017-05-09", "2017-05-12")
=> Tue, 09 May 2017
You can do it this way:
initial = Date.parse("2017-05-09")
final = Date.parse("2017-05-12")
(initial..final).each do |date|
  # rules here
end
My code is the following. I was told fminsearch would solve this faster. I checked the docs and tutorials but I'm still in the dark. How would you implement fminsearch here? Thanks in advance.
MIN = 1e10;
up_vec = u_min1 + ku*lambda;
vp_vec = v_min1 + kv*lambda;
wp_vec = w_min1 + kw*lambda;
%% the loop
for i_up = 1:length(up_vec)
    for i_vp = 1:length(vp_vec)
        for i_wp = 1:length(wp_vec)
            Jp(i_up,i_vp,i_wp) = norm(p - (A\[up_vec(i_up); vp_vec(i_vp); wp_vec(i_wp)]) .* ...
                [exp(-1i*2*pi/lambda*up_vec(i_up)); ...
                 exp(-1i*2*pi/lambda*vp_vec(i_vp)); ...
                 exp(-1i*2*pi/lambda*wp_vec(i_wp))]);
            if Jp(i_up,i_vp,i_wp) < MIN
                MIN = Jp(i_up,i_vp,i_wp);
                ind_umin = i_up;
                ind_vmin = i_vp;
                ind_wmin = i_wp;
                up_vec_min = up_vec;
                vp_vec_min = vp_vec;
                wp_vec_min = wp_vec;
                pp_min = pp;
            end
        end
    end
end
You need to define your objective function and then call fminsearch. Note that fminsearch passes a single vector to the objective, so pack the three unknowns into one vector. For instance:
funJp = @(x) norm(p - (A\[x(1); x(2); x(3)]) .* ...
    [exp(-1i*2*pi/lambda*x(1)); ...
     exp(-1i*2*pi/lambda*x(2)); ...
     exp(-1i*2*pi/lambda*x(3))]);
x = fminsearch(funJp, [u_min1, v_min1, w_min1]);
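Since fminsearch is a local, derivative-free (Nelder-Mead) search, it helps to seed it with the best point from your coarse grid, as above. Tolerances and iteration limits can be tuned via optimset; a sketch with illustrative values:
% tighten tolerances and cap function evaluations (illustrative values)
options = optimset('TolX', 1e-8, 'TolFun', 1e-8, 'MaxFunEvals', 5000);
[x_opt, J_min] = fminsearch(funJp, [u_min1, v_min1, w_min1], options);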