I tried to make a translator in Roblox Studio using HttpService by sending a request to translate.google.com, but the response I get back doesn't contain the translated text.
I pasted the response body into a Google Doc and searched it with Ctrl+F, but the only thing I could find was the original text that was supposed to be translated. Here is the code in case you want to try it yourself, though be warned that running it may make Roblox unresponsive for a while, since the response is very large.
I don't know if I'm doing something wrong; can someone please help? I just want it to give me what 'Hello world' would be in French. There are also no error messages.
local http = game:GetService("HttpService")
local Message = "Hello world"
http:UrlEncode(Message) -- 'Hello world' -> 'Hello%20world'
local response = http:RequestAsync({
    Url = "https://translate.google.com/?sl=en&tl=fr&text=" .. Message .. "!&op=translate",
    Method = "GET"
})
if response.Success then
    print(response.StatusMessage)
    print(response.StatusCode)
    print(response.Body)
    --print(response.Headers)
else
    print("The request failed: ", response.StatusCode, response.StatusMessage)
end
When you visit a URL like https://translate.google.com/?sl=en&tl=fr&text=Hello%20World!&op=translate in your browser, the translation you see is fetched by JavaScript code that the browser executes after loading the page.
The browser retrieves the HTML body of the page (which is what your code gets) and then executes the JavaScript in it, which fetches the translation and updates the page.
Unless you use a browser driver like Selenium, I don't see a simple way to do what you want.
Also, Google almost certainly has some protection against automated bots, so after too many requests your program will probably be blocked by reCAPTCHA.
The correct way to translate the text is to use the Google Cloud Translation API, which I think is free up to 500k characters per month. There is also Azure Translator from Microsoft, which also has a free tier.
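For reference, here is a minimal sketch of a Cloud Translation v2 REST call. It is written in Ruby purely to illustrate the shape of the API; the key placeholder is yours to fill in, and inside Roblox you would issue the equivalent request through HttpService.

require 'net/http'
require 'json'
require 'uri'

# Sketch only: POST the text to the Cloud Translation v2 endpoint with an API key.
uri = URI('https://translation.googleapis.com/language/translate/v2?key=YOUR_API_KEY')
res = Net::HTTP.post_form(uri, 'q' => 'Hello world', 'source' => 'en', 'target' => 'fr')
puts JSON.parse(res.body).dig('data', 'translations', 0, 'translatedText') # e.g. "Bonjour le monde"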
Your issue is likely in how you are URL-encoding the string:
http:UrlEncode(Message)
HttpService.UrlEncode returns the encoded string as a new value. It doesn't mutate the existing value. So you just need to store the result of the function call.
Message = http:UrlEncode(Message)
EDIT: As @Mohamed AMAZIRH pointed out, hitting this URL will only return HTML.
I am currently developing a Ruby API based on Sinatra. This API mostly receives GET requests from an existing social platform which supports external API integration.
The social platform fires off GET requests in the following format (only relevant parameters shown):
GET /{command}
Parameters: command and text
Where text is a string that the user has entered.
In my case, params[:text] is in fact a series of commands delimited by spaces. What I want to achieve is, for example: if params[:text] = "corporate finance"
Then I want my API to interpret the request as a GET request to
/{command}/corporate/finance
instead of requesting /{command} with a string as a parameter containing the rest of the request.
Can this be achieved on my side? Nothing can be changed in terms of the initial request from the social platform.
EDIT: I think a better way of explaining what I am trying to achieve is the following:
GET /list?text=corporate finance
Should hit the same endpoint/route as
GET /list/corporate/finance
This must not affect the initial GET request from the social platform as it expects a response containing text to display to the user. Is there a neat, best practice way of doing this?
get "/" do {
text = params[:text].split.join "/"
redirect "#{params[:command]}/#{text}"
end
might do the trick. Didn't check though.
EDIT: OK, the before filter was a bad idea. Basically you could also route to "/" and then redirect. Or, even better:
get "/:command" do {
text = params[:text].split.join "/"
redirect "#{params[:command]}/#{text}"
}
There are many possible ways of achieving this. You should check the routes section of the Sinatra docs (https://github.com/sinatra/sinatra).
The answer by three should do the trick, and to get around the fact that the filter will be invoked with every request, a conditional like this should do:
before do
  if params[:text]
    sub_commands = params[:text].split.join "/"
    redirect "#{params[:command]}/#{sub_commands}"
  end
end
I have tested it in a demo application and it seems to work fine.
The solution was to use the call! method.
I used a regular expression to intercept calls that match /something with no further path segments (i.e. not /something/something_else). I think this step can be done more elegantly.
From there, I split up my commands:
get %r{^\/\w+$} do
  # Build "/sub1/sub2" from the space-delimited text parameter, if present.
  sub_commands = params[:text] ? "/" + params[:text].split.join("/") : ""
  # call! re-dispatches the request to the app itself with a rewritten path.
  status, headers, body = call! env.merge("PATH_INFO" => "/#{params[:command]}#{sub_commands}")
  [status, headers, body]
end
This achieves exactly what I needed: it activates the correct endpoint, as if the URL had been typed in the usual format, i.e. /command/subcommand1/subcommand2 etc.
Sorry, I completely misunderstood your question, so I'm replacing my answer with this:
require 'sinatra'

get '/list/?*' do
  "yep"
end
like this, the following routes all lead to the same result:
http://localhost:4567/list
http://localhost:4567/list/corporate/finance
http://localhost:4567/list?text=corporate/finance
You need to add a route for each command, or replace the command with a * and branch your output with a case/when. The params entered by the user can be referred to via the params hash.
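For example, a minimal sketch of the splat-plus-case approach (the command names here are made up):

require 'sinatra'

# One catch-all route; dispatch on the first path segment.
get '/*' do
  command, *rest = params['splat'].first.split('/')
  case command
  when 'list'
    "listing: #{rest.join(', ')}"
  else
    "unknown command: #{command}"
  end
end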
I need help understanding AJAX. I am going through the tutorial on W3Schools (creating a button that opens a text file on the server and displays the result in a div).
One part of the tutorial seems abstract to me, without sufficient explanation. I am sure it's a prerequisite that I have missed or am not aware of; details below.
To avoid getting a cached result in response to an XMLHttpRequest made to the server, the tutorial says one needs to ADD A UNIQUE ID to the URL in the XMLHttpRequest open() method, which it does by adding a ?, a character (t), and an = after the file extension, followed by a random number (using Math.random()). See the code below.
A simple GET request would be like:
xmlhttp.open("GET","demo_get.asp,true); \\I can understand this
Unique ID added to URL
xmlhttp.open("GET","demo_get.asp?t=" + Math.random(),true); \\ I can't undersatnd this
'?' , 't' & a random number generator added to demo_get.asp - Why T, why not P Q R Z ?? Why "?" after .asp
Should the compiler not go bonkers and report an error if arbitary characters are added to the file location. How is the part of the URL after the file extention handled as in this case ?t= + Math.random()
This has been a case of much agony and frustration for the last 3 days cause I don't get which part of JS i have missed here, what do you call this concept and where can I read it ??
This apart, specifying message headers while sending data - What are HTTP headers and what do they mean. How do I decide what the parameters of the setRequestHeader() method shall be ?
Please help. Rest of Ajax is clear to me.
(I haven't read on the second part - the message headers. I have asked that query here to avoiding posting another question later, just in case it turns out to be as eluding and enigmatic as the UNIQUE ID concept - Apologies in case its a direct simple question I ought to read up myself)
The cache compares the requested URL with the URLs it has stored; if a unique ID is added to the URL, it never matches, so the browser treats it as a fresh GET request and forwards it to the server. This is a standard way to bypass/disable browser caching.
As for the "why t" part: the ? marks the start of the query string, the portion of a URL that carries name=value parameters. It is handled by the server, not treated as part of the file path, so nothing goes bonkers; parameters the server doesn't recognise are simply ignored. The name t is arbitrary, and any parameter name the page doesn't already use would work, since only the value has to change between requests.
Please refer to this document for a better understanding of browser caching; page 4 explains the same thing as stated above:
http://www.f5.com/pdf/white-papers/browser-behavior-wp.pdf
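To make the trick concrete, here is a two-line sketch (in Ruby, purely to illustrate; the parameter name is arbitrary):

require 'securerandom'

# Any unused parameter name works; only the value has to differ per request.
busted_url = "demo_get.asp?t=#{SecureRandom.hex(8)}" # e.g. demo_get.asp?t=3f9a1c2b4d5e6f70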
How would you use Ruby to open a website, do a search in the search field, and then parse the results? For example, I'd enter something into a search engine and then parse the results page. I know how to use Nokogiri to fetch and open a page; I'm lost on how to enter text into the search field and move on to the results. Also, on the page I'm searching I have to click a button to submit; I can't simply hit Enter to move forward. Thank you so much for your help.
Use Mechanize, a library for automating interaction with websites.
Something like Mechanize will work, but interacting with the front-end UI code is always going to be slower and more problematic than making requests directly against the back end.
Your best bet would be to look at the request being made to the server (probably an HTTP GET or POST with some associated params). You can do this with Firebug, or Fiddler 2 on Windows. Then, once you know the parameters the server accepts, just make the request yourself.
For example, if you were doing this with the duckduckgo.com search engine, you could either get Mechanize to go to duckduckgo.com, type into the search box, and click submit, or you could just make a GET request to http://www.duckduckgo.com?q=search_term_here.
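A minimal sketch of that direct approach with Mechanize (the URL is DuckDuckGo's HTML endpoint; the CSS selector for result links is an assumption about the page structure):

require 'mechanize'

agent = Mechanize.new
page = agent.get('https://duckduckgo.com/html/?q=ruby+web+scraping')
# Mechanize pages expose Nokogiri's search, so you can parse results directly.
page.search('a.result__a').each { |link| puts link.text }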
You can use Mechanize for something like this, but it might be overkill. I would take a look at RestClient, especially if you don't need to manage cookies.
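A rough sketch with RestClient, plus Nokogiri for the parsing (the URL and selector are illustrative, as above):

require 'rest-client'
require 'nokogiri'

# RestClient handles the raw HTTP; Nokogiri parses the returned HTML.
html = RestClient.get('https://duckduckgo.com/html/', params: { q: 'ruby web scraping' })
doc = Nokogiri::HTML(html.body)
puts doc.css('a.result__a').map(&:text)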
Edit:
If you can determine the specific URL the form submits to, say 'example.com/search', and you know the request is a POST (which it usually is when submitting a form), you could construct something like this with Mechanize:
agent = Mechanize.new
agent.post 'http://example.com/search', {
  "_id0:Number" => string_to_search_for,
  "_id0:submitButton" => "Enter"
}
Notice how the 'name' attribute of each form element becomes a key of the POST params and its 'value' attribute becomes the value. The text input gets its value directly from the text you would have entered. When you push the submit button, the form is transformed into a request like this and submitted to the server (here you are just making that request directly). The result of the POST should be HTML you can parse for the info you need.
I'm trying to figure out whether or not a user likes our brand page. Based on that, we want to show either a Like button or some 'thank you' text.
I'm working with a Sinatra application hosted on Heroku.
I tried the code from this thread: Decoding Facebook's signed request in Ruby/Sinatra
However, it doesn't seem to grab the signed_request and I can't figure out why.
I have the following methods:
get "/tab" do
#encoded_request = params[:signed_request]
#json_request = decode_data(#encoded_request)
#signed_request = Crack::JSON.parse(#json_request)
erb :index
end
# used by Canvas apps - redirect the POST to be a regular GET
post "/tab" do
#encoded_request = params[:signed_request]
#json_request = decode_data(#encoded_request)
#signed_request = Crack::JSON.parse(#json_request)
redirect '/tab'
end
I also have the helper messages from that thread, as they seem to make sense to me:
helpers do
  def base64_url_decode(payload)
    encoded_str = payload.gsub('-', '+').gsub('_', '/')
    encoded_str += '=' while !(encoded_str.size % 4).zero?
    Base64.decode64(encoded_str)
  end

  def decode_data(signed_request)
    # A signed_request is "signature.payload"; the data is the second part.
    payload = signed_request.split('.')[1]
    base64_url_decode(payload)
  end
end
However, when I just do
#encoded_request = params[:signed_request]
and read that out in my view with:
<%= #encoded_request %>
I get nothing at all.
Shouldn't this return at least something? My app seems to be crashing because well, there's nothing to be decoded.
I can't seem to find a lot of information about this around the internet so I'd be glad if someone could help me out.
Are there better ways to know whether or not a user likes our page? Or, is this the way to go and am I just overlooking something obvious?
Thanks!
The hint should be in your app crashing because there's nothing to decode.
I suspect the parameters get lost when redirecting. Think about it at the HTTP level:
The client posts to /tab with the signed_request in the params.
The app parses the signed_request and stores the result in instance variables.
The app redirects to /tab, i.e. sends a response with code 302 (or similar) and a Location header pointing to /tab. This completes the request/response cycle and the instance variables get discarded.
The client makes a new request: a GET to /tab. Because of the way redirects work, this will no longer have the params that were sent with the original POST.
The app tries to parse the signed_request param but crashes because no such param was sent.
The simplest solution would be to just render the template in response to the POST instead of redirecting.
If you really need to redirect, you need to carefully pass along the signed_request as query parameters in the redirect path. At least that's a solution I've used in the past. There may be simpler ways to solve this, or libraries that handle some of this for you.
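A minimal, untested sketch of that approach, using Rack's escaping so the encoded payload survives the round trip:

post "/tab" do
  # Carry the signed_request through the redirect as a query parameter
  # so the GET handler still receives it.
  redirect "/tab?signed_request=#{Rack::Utils.escape(params[:signed_request].to_s)}"
end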
We are using the Dynamic Script Tag with JsonP mechanism to achieve cross-domain Ajax calls. The front-end widget is very simple: it calls a search web service, passing search criteria supplied by the user, then receives and dynamically renders the results.
Note - For those that aren’t familiar with the Dynamic Script Tag with JsonP method of performing Ajax-like requests to a service that return Json formatted data, I can explain how to utilise it if you think it could be relevant to the problem.
The service is WCF hosted on IIS. It is RESTful, so the first thing we do when the user clicks search is generate a URL containing the criteria. It looks like this...
https://.../service.svc?criteria=john+smith
We then use a dynamically created Html Script Tag with the source attribute set to the above Url to make the request to our service. The result is returned and we process it to show the results.
This all works fine, but we noticed that when using IE the service receives the request from the client twice. I used Fiddler to monitor the traffic leaving the browser and, sure enough, I saw two requests with the following URLs...
Request 1: https://.../service.svc?criteria=john+smith
Request 2: https://.../service.svc?criteria=john+smith&_=123456789
The second request has been appended with some kind of Id. This Id is different for every request.
My immediate thought is it was something to do with caching. Adding a random number to the end of the url is one of the classic approaches to disabling browser caching. To prove this I adjusted the cache settings in IE.
I set “Check for newer versions of stored pages” to “Never” – This resulted in only one request being made every time. The one with the random number on the end.
I set this setting value back to the default of “Automatic” and the requests immediately began to be sent twice again.
Interestingly, I don't receive both responses on the client. I found this reference where someone suggests this could be a bug in IE. The fact that this doesn't happen for me in Firefox supports that theory.
Can anyone confirm if this is a bug with IE? It could be by design.
Does anyone know of a way I can stop it happening?
Some of the vaguer searches that my users run take up enough processing resource to make doubling anything up a very bad idea. I really want to avoid this if at all possible :-)
I just wrote an article on how to avoid caching of Ajax requests :-)
It basically involves adding the no-cache headers to any Ajax request that comes in:
public abstract class MyWebApplication : HttpApplication
{
    protected MyWebApplication()
    {
        this.BeginRequest += new EventHandler(MyWebApplication_BeginRequest);
    }

    void MyWebApplication_BeginRequest(object sender, EventArgs e)
    {
        string requestedWith = this.Request.Headers["x-requested-with"];
        if (!string.IsNullOrEmpty(requestedWith) && requestedWith.Equals("XMLHttpRequest", StringComparison.InvariantCultureIgnoreCase))
        {
            this.Response.Expires = 0;
            this.Response.ExpiresAbsolute = DateTime.Now.AddDays(-1);
            this.Response.AddHeader("pragma", "no-cache");
            this.Response.AddHeader("cache-control", "private");
            this.Response.CacheControl = "no-cache";
        }
    }
}
I eventually established the reason for the duplicate requests. As I said, the mechanism I chose for making Ajax calls was dynamic script tags. I built the request URL, created a new script element, and assigned the URL to the src property...
var script = document.createElement("script");
script.src = "https://....";
Then, to execute the script, I appended it to the document head. Crucially, I was using the jQuery append function...
$("head").append(script);
Inside the append function, jQuery anticipated that I was trying to make an Ajax call. If the element being appended is a script, it runs a special routine that makes an Ajax request using the XMLHttpRequest object. But the script was also still being appended to the document head and executed there by the browser. Hence the double request.
The first came direct from the script – the one I intended to happen.
The second came from inside the jQuery append function. This was the request suffixed with the randomly generated query string argument in the form "&_=123456789".
I simplified things by preventing the jQuery side effect and using the native append function...
document.getElementsByTagName("head")[0].appendChild(script);
One request now happens in the way I intended. I had no idea that the JQuery append function could have such a significant side effect built in.
See www.enhanceie.com/redir/?id=httpperf for further discussion.