Extracting CID from Reference or ID - google-places-api

Is it possible to extract the CID of a business from the Reference or the ID of the business? I'd rather not have to rewrite all of my scripts to get the canonical URL of the business since I'm already getting the Reference and the ID.
I believe this question is the opposite of a question previously asked.
Thank you.

There is no direct way to do so. An indirect way is to do a place details request, which will return the cid in the url field.
To perform a Place Details call, follow the Place Details documentation. It is very straightforward, albeit a bit annoying to have to perform an extra call. I don't know about you, but I have never been very fond of the Places API for that reason: too many requests for not enough data.
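As a minimal sketch in Node-style JavaScript (assumes Node 18+ for the global fetch; the reference parameter and the url field are as documented for Place Details at the time, so double-check against the current docs):
async function cidFromReference(reference, key) {
  const endpoint = 'https://maps.googleapis.com/maps/api/place/details/json';
  const res = await fetch(endpoint + '?reference=' + encodeURIComponent(reference) + '&key=' + key);
  const data = await res.json();
  // The url field looks like "https://maps.google.com/?cid=1234567890"
  const url = (data.result && data.result.url) || '';
  const match = url.match(/[?&]cid=(\d+)/);
  return match ? match[1] : null;
}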

Path Variable or Request parameter?

When we design REST APIs, the guidance is to use a path variable when you need to identify a resource, and request parameters for operations like sorting, filtering, searching, and pagination. Let us take a scenario of Employee:
Employee has three fields: name, companyName, socialSecurityNo.
Now I want the Employee with socialSecurityNo = ABC.
It seems fine to have an endpoint with a path variable like /employees/{socialSecurityNo}, since we are identifying a resource.
It also seems intuitive that we are filtering on the basis of socialSecurityNo and should have an endpoint like /employees?socialSecurityNo=ABC
Which is the right way? I am confused and think that both apply.
It's a good question.
/employees?socialSecurityNo=ABC
is filtering all employees on socialSecurityNo. If socialSecurityNo is unique to an employee, there's strictly no point in this endpoint existing, and a client should use /employees/{socialSecurityNo}.
That said, there's nothing wrong with filtering on a unique field value (socialSecurityNo), and if a client finds it easier to use this version (for whatever technical reason) then that's fine. There is no 'right' way. The ultimate reason APIs exist is to allow valuable work to be done by a client. Work with the client to allow that to happen, but keep best practice in mind and know when the solution isn't the best but is the most practical in the situation.
I would expect to see:
/employees?surname=Smith
as this is filtering on a non-unique field value and should return a collection of Employee objects.
The 'right' thing to do is keep the results consistent. If you have both ways of finding an employee, make sure the returned result is the same in each case.
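A sketch of that consistency in Express-style JavaScript (findBySsn and listEmployees are hypothetical lookup helpers; both routes return the same Employee representation):
const express = require('express');
const app = express();

// Identify a single resource: 404 when absent, the object itself when found.
app.get('/employees/:ssn', async (req, res) => {
  const employee = await findBySsn(req.params.ssn);
  if (!employee) return res.status(404).end();
  res.json(employee);
});

// Filter the collection: always returns an array, empty or not.
app.get('/employees', async (req, res) => {
  if (req.query.socialSecurityNo) {
    const employee = await findBySsn(req.query.socialSecurityNo);
    return res.json(employee ? [employee] : []);
  }
  res.json(await listEmployees());
});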

How to implement multi-tenant functionality in ASP.NET Core

I have an ASP.NET Core application. I want to allow multiple/different tenants (clients) to access the same application, but using different URLs. I have a common database for all tenants (clients).
The main part is that I want to host my application at a domain, say www.myapplication.com, and then allow different tenants (clients) to access the same application using
1. www.TenantOne.myapplication.com
2. www.TenantTwo.myapplication.com
3. www.{TENANCY_NAME}.myapplication.com
I can't find any info on how to do this and I'm stuck.
How do I do it? Please provide the code. Thanks.
As Saravanan suggested, these types of questions don't belong here on SO. To get you started, I suggest you look at whether any frameworks such as SaaSKit are available to add a multi-tenancy layer to the pipeline.
The essential part is to know where each request comes from. Using subdomains is a good way to achieve that, and middleware is a good place to 'identify' your tenant (see the sketch below). You could have a database to persist the tenants, but the implementation is entirely up to you. I also wrote a little article on the subject. Although it isn't ASP.NET Core, the principles still apply.
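A minimal sketch of that middleware idea in Express-style JavaScript (the question is ASP.NET Core, so this only illustrates the principle; tenantsBySubdomain stands in for whatever store you persist tenants in):
const tenantsBySubdomain = { tenantone: { id: 1 }, tenanttwo: { id: 2 } };

function tenantMiddleware(req, res, next) {
  // req.hostname is e.g. "tenantone.myapplication.com"
  const subdomain = req.hostname.split('.')[0].toLowerCase();
  req.tenant = tenantsBySubdomain[subdomain];
  if (!req.tenant) return res.status(404).send('Unknown tenant');
  next(); // downstream handlers can now read req.tenant
}
// Registered once, e.g. app.use(tenantMiddleware);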
The approach I believe you are looking for is similar to the article at the url below.
https://dotnetthoughts.net/building-multi-tenant-web-apps-with-aspnet-core/
In it, the author splits the requesting URL into an array of strings delimited by the dot in the address. The variable 'subdomain' is then set to the first element of that array. In your question, it looks like you may want to use the second element in the array, but you get the idea.
// Split the Host header (e.g. "www.tenantone.myapplication.com") on dots.
var fullAddress = actionExecutingContext.HttpContext?.Request?
    .Headers["Host"].ToString().Split('.');
// The article takes the first segment; with a "www." prefix you would
// take the second segment (fullAddress?[1]) instead.
var subdomain = fullAddress?[0];
// do something, get something, return something
How you use this data is up to you. The author of the article created a filter attribute, but there are many possibilities such as passing the tenant name as a parameter to a service function.
Sorry, you have to get something to start with and then come back for people to help you.
I would say that this is all about domain-based wildcard mapping and a change in your authentication logic to get the tenant ID from the URL. Once you have identified the tenant, you just log in and then take it forward. You might have a database with the tenant details, like
tenant1 | tenant1.company.com | guid-of-the-tenant | etc...
Once you get the URL, you look it up in the above table, get the tenant code, and then choose the login mode and proceed.
In case you have tried something already, we would be happy to point you in the right direction if it does not work.

What does an Ajax call response like 'for (;;); { json data }' mean? [duplicate]

Possible Duplicate:
Why do people put code like “throw 1; <dont be evil>” and “for(;;);” in front of json responses?
I found this kind of syntax being used on Facebook for Ajax calls. I'm confused about the for (;;); part at the beginning of the response. What is it used for?
This is the call and response:
GET http://0.131.channel.facebook.com/x/1476579705/51033089/false/p_1524926084=0
Response:
for (;;);{"t":"continue"}
I suspect the primary reason it's there is control. It forces you to retrieve the data via Ajax, not via JSON-P or similar (which uses script tags, and so would fail because that for loop is infinite), and thus ensures that the Same Origin Policy kicks in. This lets them control what documents can issue calls to the API — specifically, only documents that have the same origin as that API call, or ones that Facebook specifically grants access to via CORS (on browsers that support CORS). So you have to request the data via a mechanism where the browser will enforce the SOP, and you have to know about that preface and remove it before deserializing the data.
So yeah, it's about controlling (useful) access to that data.
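As a sketch of what the intended same-origin consumer looks like in modern JavaScript (the URL would be the one from the question; the known preface is stripped before parsing):
async function fbJson(url) {
  const res = await fetch(url, { credentials: 'include' });
  const text = await res.text();
  const prefix = 'for (;;);';
  const body = text.startsWith(prefix) ? text.slice(prefix.length) : text;
  return JSON.parse(body); // => {"t":"continue"}
}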
Facebook has a ton of developers working internally on a lot of projects, and it is very common for someone to make a minor mistake; whether it be something as simple and serious as failing to escape data inserted into an HTML or SQL template or something as intricate and subtle as using eval (sometimes inefficient and arguably insecure) or JSON.parse (a compliant but not universally implemented extension) instead of a "known good" JSON decoder, it is important to figure out ways to easily enforce best practices on this developer population.
To face this challenge, Facebook has recently been going "all out" with internal projects designed to gracefully enforce these best practices, and to be honest the only explanation that truly makes sense for this specific case is just that: someone internally decided that all JSON parsing should go through a single implementation in their core library, and the best way to enforce that is for every single API response to get for(;;); automatically tacked on the front.
In so doing, a developer can't be "lazy": they will notice immediately if they use eval(), wonder what is up, and then realize their mistake and use the approved JSON API.
The other answers being provided seem to all fall into one of two categories:
misunderstanding JSONP, or
misunderstanding "JSON hijacking".
Those in the first category rely on the idea that an attacker can somehow make a request "using JSONP" to an API that doesn't support it. JSONP is a protocol that must be supported on both the server and the client: it requires the server to return something akin to myFunction({"t":"continue"}) such that the result is passed to a local function. You can't just "use JSONP" by accident.
Those in the second category are citing a very real vulnerability that has been described allowing a cross-site request forgery via <script> tags to APIs that do not use JSONP (such as this one), allowing a form of "JSON hijacking". This is done by changing the Array/Object constructor, which allows one to access the information being returned from the server without a wrapping function.
However, that is simply not possible in this case: the reason it works at all is that a bare array (one possible result of many JSON APIs, such as the famous Gmail example) is a valid expression statement, which is not true of a bare object.
In fact, the syntax for objects defined by JSON (which includes quotation marks around the field names, as seen in this example) conflicts with the syntax for blocks, and therefore cannot be used at the top-level of a script.
js> {"t":"continue"}
typein:2: SyntaxError: invalid label:
typein:2: {"t":"continue"}
typein:2: ....^
For this example to be exploitable by way of Object() constructor remapping, it would require the API to have instead returned the object inside of a set of parentheses, making it valid JavaScript (but then not valid JSON).
js> ({"t":"continue"})
[object Object]
Now, it could be that this for(;;); prefix trick is only "accidentally" showing up in this example, and is in fact being returned by other internal Facebook APIs that are returning arrays; but in this case that should really be noted, as that would then be the "real" cause for why for(;;); is appearing in this specific snippet.
Well, the for(;;); is an infinite loop (you can use Chrome's JavaScript console to run that code in a tab if you want, and then watch the CPU usage in the task manager go through the roof until the browser kills the tab).
So I suspect that maybe it is being put there to frustrate anyone attempting to parse the response using eval or any other technique that executes the returned data.
To explain further, it used to be fairly commonplace to parse a bit of JSON-formatted data using JavaScript's eval() function, by doing something like:
var parsedJson = eval('(' + jsonString + ')');
...this is considered unsafe, however: if for some reason your JSON-formatted data contains executable JavaScript code instead of (or in addition to) JSON-formatted data, then that code will be executed by eval(). This means that if you are talking to an untrusted server, or if someone compromises a trusted server, they can run arbitrary code on your page.
Because of this, using things like eval() to parse JSON-formatted data is generally frowned upon, and the for(;;); statement in the Facebook JSON will prevent people from parsing the data that way. Anyone that tries will get an infinite loop. So essentially, it's like Facebook is trying to enforce that people work with its API in a way that doesn't leave them vulnerable to future exploits that try to hijack the Facebook API to use as a vector.
I'm a bit late and T.J. has basically solved the mystery, but I thought I'd share a great paper on this particular topic that has good examples and provides deeper insight into this mechanism.
These infinite loops are a countermeasure against "Javascript hijacking", a type of attack that gained public attention with an attack on Gmail that was published by Jeremiah Grossman.
The idea is as simple as it is beautiful: a lot of users tend to be permanently logged in to Gmail or Facebook. So you set up a site, and in your malicious site's JavaScript you override the object or array constructor:
function Object() {
  // Make an Ajax request to your malicious site, exposing the object data
}
then you include a <script> tag in that site such as
<script src="http://www.example.com/object.json"></script>
And finally you can read all about the JSON objects in your malicious server's logs.
As promised, the link to the paper.
This looks like a hack to prevent a CSRF attack. There are browser-specific ways to hook into object creation, so a malicious website could do that first, and then include the following:
<script src="http://0.131.channel.facebook.com/x/1476579705/51033089/false/p_1524926084=0" />
If there weren't an infinite loop before the JSON, an object would be created, since JSON can be eval()ed as javascript, and the hooks would detect it and sniff the object members.
Now if you visit that site from a browser while logged into Facebook, it can get at your data as if it were you, and then send it back to its own server via, e.g., an Ajax or JavaScript POST.

Should hidden field information always be encrypted?

A question based on a comment made here:
storing user detail ... session vs cache !
Summary: I mentioned a technique I've used where I populate a model and use hidden fields to keep and pass back that information; Viewstate on the cheap. Simon Halsey said that the information should be encrypted or hashed so it is not tampered with. I'm thinking the added complexity of hashing it is just a form of YAGNI.
I can see that for sensitive information, definitely, but is this a good rule of thumb in general? What am I missing?
I actually have an attribute to do this (something similar) and speak about this exact thing in a security presentation. Yes, you should hash a copy of the value; encrypting it is up to you. If you encrypt it you lose model binding; if you leave it plain it is more open to tampering, although a hash check helps. I'll post the code shortly for it and update this post. Who would ever think Viewstate helped with security : )
But to answer your question: you can encrypt it, but you need a way to at least validate it on the server side. So I hash a value, hash the posted value, and then compare the hashes in the attribute. Encrypting can help, but then you need to implement either your own model binder or manually handle those values.
The rule of thumb would be: for any values that could be maliciously overwritten to attack your data, you want some protection/validation on those fields. You could compare server-side against what you know is a valid option for them (a form of whitelisting), but then you have the same rules duplicated on loading the data and on saving the data, and that gets a bit messy at times, unless it's as simple as limiting a user's get/update to a single userId.
What I mean is: if you are updating, say, a user's record, generally the main thing that matters for security is that the userId is not changed by the user to update a record that isn't theirs. The logic on get/save is easy: "where o.UserId == userId".
However, in complex role-based security the logic becomes trickier and is not as clean to limit record updates like this. In those cases you can really take advantage of encrypted/hashed fields. I always hash the specific fields used for update (a sketch of the mechanism follows below). Sure, they can be forged with other valid hashed fields from a previous request, but the scope of potential damage is significantly more limited this way.
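A minimal sketch of that hash-and-compare mechanism in Node-style JavaScript (the answer describes an ASP.NET attribute, so this only illustrates the idea; FIELD_HMAC_SECRET is a hypothetical server-side secret):
const crypto = require('crypto');
const SECRET = process.env.FIELD_HMAC_SECRET; // hypothetical config; never sent to the client

// Emitted alongside the hidden field when rendering the form.
function signField(value) {
  return crypto.createHmac('sha256', SECRET).update(String(value)).digest('hex');
}

// On postback: recompute the hash of the posted value and compare.
function verifyField(postedValue, postedSignature) {
  const expected = signField(postedValue);
  // timingSafeEqual needs equal-length buffers and avoids timing leaks
  return expected.length === postedSignature.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(postedSignature));
}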

Proper way of deleting records with CodeIgniter

I came across another Stack Overflow post regarding GET vs POST and it made me think. With CI, my URL for deleting a record is http://domain.com/item/delete/100, which deletes record id 100 from my DB. The record_id is pulled via $this->uri->segment. In my model I do have a where clause that checks that the user is indeed the owner of that record. A user_id is stored in a session inside the DB. Is that good enough?
My understanding is, POST should be used for one-time modification of data and GET is for retrieving records (e.g. viewing an item or permalink).
You really ought to require a POST request when deleting. In CodeIgniter this could be as simple as checking $this->input->post('confirm').
Part of the justification is that you don't want data changed on a GET request. Since you said you are requiring the person to be the owner, there is still the problem that someone puts an image on a page with the source being http://domain.com/item/delete/100. Using POST isn't a cure-all, as you can make POST requests from JavaScript, so it would still be possible for a malicious user to create the delete request if you aren't properly filtering input.
I should admit that I'm a bit of a purist and just feel requiring POST is the right way. It's how the standards were written (okay, you could argue it should be a DELETE request, but browsers typically don't support them), and in other cases you really need to follow them (there have been cases of web crawlers deleting pages).
If you want the delete link to be http://domain.com/item/delete/100, then you could display a confirmation message with a form that does a POST action to confirm the deletion, as in the sketch below.
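A sketch of that confirmation flow in Express-style JavaScript (the question is CodeIgniter, so treat this as the pattern rather than CI code; deleteItem is a hypothetical model call that includes the owner check, and session middleware is assumed):
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false })); // parse the confirm form

// GET only renders the confirmation form; nothing is changed yet.
app.get('/item/delete/:id', (req, res) => {
  const id = Number(req.params.id); // numeric ids only, e.g. /item/delete/100
  res.send('<form method="post" action="/item/delete/' + id + '">' +
           '<button name="confirm" value="1">Really delete item ' + id + '?</button>' +
           '</form>');
});

// The delete itself only happens on a confirmed POST, with the owner check.
app.post('/item/delete/:id', async (req, res) => {
  if (!req.body.confirm) return res.redirect('/items');
  await deleteItem(Number(req.params.id), req.session.userId);
  res.redirect('/items');
});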
I hope this helps,
Bill
