Limit output length of crypt or hash - Laravel

I was wondering if there is a way to limit the output length of an encrypted or hashed value.
The case I have is that I provide links for customers, each containing the id of an entry in my db which holds information relevant to the receiver. Sooo... to avoid access to entries meant for someone else, I encrypt the id and append the result to the link. Now I am faced with the problem that those "final links" are extremely long and ugly as f%!# (I actually got a lot of responses that they look highly suspicious, and some customers didn't click on them because they were afraid of being redirected to a phishing site).
However, this made me think about the option of limiting or individually setting the length of the encryption's output, like forcing it to contain 8 to 16 characters instead of about 250 (not sure how long they actually are). I also want to avoid using something like a redirect page or a "self-made" URL shortener, because of the extra step I don't need.
I've spent more than two hours googling and reading several discussions on this topic, and yeah... I am not satisfied with the results. Most of them were started two to five years ago.
What else have I tried?
I looked into Laravel's API, especially Illuminate\Encryption\Encrypter, but still found no solution.
Sooooo... I hope someone can help me out with a solution based on Laravel. I don't want to use anything else (like php_mbcrypt itself); only Laravel's encrypt or Hash::make.
Thanks in advance!
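One way around the length problem, independent of Laravel, is to not encrypt the id at all: leave it in the URL in plain form and append a short truncated-HMAC tag that the server verifies. The sketch below is in Go rather than PHP, purely as a language-agnostic illustration; the secret key, URL, and 9-byte tag length are assumptions, and truncating the MAC trades some brute-force margin for a much shorter link.

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// Assumption: an application-level secret, like Laravel's APP_KEY.
var key = []byte("keep-this-secret")

// sign returns a short URL-safe tag that binds the id to the secret.
// 9 bytes (72 bits) of MAC is still far beyond guessing in practice.
func sign(id string) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(id))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil)[:9]) // 12 chars
}

// verify recomputes the tag and compares it in constant time.
func verify(id, tag string) bool {
	return hmac.Equal([]byte(sign(id)), []byte(tag))
}

func main() {
	id := "42"
	tag := sign(id)
	fmt.Printf("https://example.com/entries/%s?sig=%s\n", id, tag) // short link
	fmt.Println("valid: ", verify(id, tag))
	fmt.Println("forged:", verify("43", tag))
}

For comparison, Laravel's signed routes (URL::signedRoute) use the same HMAC-appended-to-the-URL idea, just with the full-length signature.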

Related

Purely Laravel: I would like to generate a key (token) on the page and then have users enter the key in a form in order for the form to be processed

Who is asking? - This question comes from a PHP developer of less than six months who fell completely in love with PHP because of its awesomeness. Also, I just joined STACKOVERFLOW today, 7th Dec 2019.
Reason for the question: I have a form which I have completely built and validated with Laravel, but I want to protect it from spam, not with reCAPTCHA but with a pin (a kind of generated key). I've seen it used on various websites and I want to apply it too.
Plan of action: The generated code will be placed at the end of the form next to an input field, and whatever the user fills in must match the code generated on the last page refresh. If it doesn't match, I want to kill the page or perhaps display a page with a "WELL DONE" message.
My thoughts: I'm new here, and maybe this question has been asked before, but honestly I've been at the computer for over a week (spending at least 18 hours a day searching and searching) with no really understandable solution.
What I can't do: Because I'm using Laravel, I don't know where to start with this functionality or how to finish it.
My helper: You are reading this, and I believe you have the skills and techniques to help me without sweating at all. Just imagine a friend whose head is floating but whose body is already in the ocean, about to drown. Also imagine a friend who has only one shot (2 days) to change his life, and if it isn't done, only God knows what's to come. PLEASE HELP ME!
To everyone: Forgive me for the long message, I just believe that if I can express myself deeply enough, someone out there will help me out.
Thank you to all the awesome developers around the world.
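The flow described boils down to: generate a random code when the page renders, remember it server-side, and compare it with the user's input on submit. Here is a minimal, language-agnostic sketch in Go; the map standing in for the session store and the 6-character pin length are assumptions for illustration.

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newPin returns a short random code to print next to the form.
func newPin() string {
	b := make([]byte, 3) // 3 random bytes -> 6 hex characters
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

func main() {
	// On page render: generate the pin and remember it server-side.
	// (A map stands in for the session store here.)
	session := map[string]string{"form_pin": newPin()}
	fmt.Println("show next to the form:", session["form_pin"])

	// On submit: compare the user's input with the stored pin.
	typed := session["form_pin"] // pretend this came from the form field
	if typed == session["form_pin"] {
		fmt.Println("match - process the form")
	} else {
		fmt.Println("mismatch - reject the submission")
	}
}

In Laravel the same shape would be session()->put() when rendering the form and a comparison in the controller (or a custom validation rule) on submit.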

Making sure my Go page view counter isn't abused

I believe I have found a very good and fast solution for efficiently counting page views:
Working example in go playground here: https://play.golang.org/p/q_mYEYLa1h
My idea is to push this to the database every X minutes, and after pushing a key, delete it from the page map.
My question now is: what would be the optimal way to ensure that this isn't abused? Ideally, I would only want to increase the page count for the same person if at least 2 hours have passed since they last visited the page.
As far as I know, it would be ideal to store and compare both IP and user agent (I don't want to rely on cookies/localStorage), but I'm not quite sure how to store and compare this information efficiently.
I'd likely get both the IP (req.Header.Get("x-forwarded-for")) and UserAgent (req.UserAgent()) from http.Request.
I was thinking of making a visitor struct, similar to my page struct, that would look like this:
import (
	"sync"
	"time"
)

type visitor struct {
	mutex          sync.Mutex           // guards the map below
	urlIPUAAndTime map[string]time.Time // URL+IP+UA -> time of last counted visit
}
This should make it possible to do something similar to before. However, imagine the website had so many requests that hundreds of millions of unique visitor entries were being stored, each of which could only be deleted after 2 (or more) hours. I therefore think this is not a good solution.
I guess it would be ideal/necessary to write to and read from some file, but I'm not sure how this should be done efficiently. Help would be greatly appreciated.
One way to optimize is to add a Bloom filter in front of this map. A Bloom filter is a probabilistic structure which can tell you one of two things:
this user is definitely new
this user was possibly here before
This is a way to cut off computation at an early stage. If many of your users are new, you save the database requests needed to check all of them.
What if the structure says "user is possibly non-unique"? Then you go to the database and check.
Here's one more optimization: if you do not need very accurate information and can accept an error of several percent, you may use the Bloom filter alone. I guess many large sites use this technique for estimation.
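To make the idea concrete, here is a minimal hand-rolled Bloom filter in Go (a sketch only; in production you would reach for a tested library). The bit-set size and hash count are illustrative, and the key combines URL, IP, and user agent as in the question.

package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a minimal Bloom filter: k bit positions per key in a fixed
// bit set. It can report "possibly seen" (false positives happen) but
// never misses a key it has recorded (no false negatives).
type bloom struct {
	bits []bool
	k    uint32
}

func newBloom(m int, k uint32) *bloom {
	return &bloom{bits: make([]bool, m), k: k}
}

// positions derives k indexes from one 64-bit FNV hash (double hashing).
func (b *bloom) positions(key string) []uint32 {
	h := fnv.New64a()
	h.Write([]byte(key))
	sum := h.Sum64()
	h1, h2 := uint32(sum), uint32(sum>>32)|1 // |1 avoids a zero stride
	pos := make([]uint32, b.k)
	for i := uint32(0); i < b.k; i++ {
		pos[i] = (h1 + i*h2) % uint32(len(b.bits))
	}
	return pos
}

// TestAndAdd reports whether key was possibly seen before, then records it.
func (b *bloom) TestAndAdd(key string) bool {
	seen := true
	for _, p := range b.positions(key) {
		if !b.bits[p] {
			seen = false
			b.bits[p] = true
		}
	}
	return seen
}

func main() {
	f := newBloom(1<<20, 4) // sizes are illustrative
	key := "/page|1.2.3.4|Mozilla/5.0"
	fmt.Println(f.TestAndAdd(key)) // false: definitely new
	fmt.Println(f.TestAndAdd(key)) // true: possibly seen before
}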

How to correlate the scenario below for check boxes?

In my script I have a scenario where the page contains multiple check boxes, for example 10. Users select check boxes as they need: for example, one user selects 4 check boxes and another clicks 5, so the number varies per user.
So how do I correlate those values?
Thank you.
From the website: "Please don’t share your solutions, ask for help, or help others. This is meant to be a challenge."
So you appear to be violating one of the primary rules of that website. I have looked at this challenge, and it's really good for gauging someone's knowledge.
However, to address the technology generally: reading your question, I get the sense you may be missing certain fundamental knowledge for this kind of work. Here is some of that foundation; hopefully it will help you address this specific question.
Definitions:
Correlation - taking data the SERVER sends to the browser, capturing it, and sending it back. Information present on web pages fits into this category.
Parameterization - you've got a set of values you'd like to put into web forms, usually values like names, addresses, etc.
Also understand exactly what happens when you perform certain actions in your browser. When you "click" a checkbox, does that actually send a message to a server? Usually it doesn't (though not always). So when you use phrases like "click a checkbox", that tells me you may not appreciate that performance testing is server focused, not browser focused.
Performance testing isn't intuitive, so you need to understand these concepts. If you dedicate time to understanding the concepts outlined above, you'll have the knowledge to complete the challenge.
Good luck.
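To make "correlation" concrete outside any particular tool, here is a hedged Go sketch: capture a dynamic value the server embedded in one response and replay it in the next request. The URLs, the hidden "token" field, and the checkbox parameter names are all made up for illustration.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"regexp"
)

func main() {
	// 1. Request the page that carries the dynamic value.
	resp, err := http.Get("https://example.com/form") // hypothetical URL
	if err != nil {
		panic(err)
	}
	body, err := io.ReadAll(resp.Body)
	resp.Body.Close()
	if err != nil {
		panic(err)
	}

	// 2. Correlate: capture the server-generated value, here a
	//    hypothetical hidden "token" field.
	re := regexp.MustCompile(`name="token" value="([^"]+)"`)
	m := re.FindSubmatch(body)
	if m == nil {
		panic("token not found - correlation failed")
	}

	// 3. Replay the captured value, together with whichever boxes
	//    this virtual user checked, in the follow-up request.
	form := url.Values{
		"token":    {string(m[1])},
		"checkbox": {"1", "3", "4"}, // varies per user/iteration
	}
	resp2, err := http.PostForm("https://example.com/submit", form)
	if err != nil {
		panic(err)
	}
	defer resp2.Body.Close()
	fmt.Println("status:", resp2.Status)
}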
What is driving the variation in which check boxes get checked? Is it the result of something that comes back from the server in a previous response? Or is it somewhat random, based on whatever the user wants to do at runtime?

Getting a user's photos from 3rd parties efficiently

Let's say you have an app where your user authenticates with Picasa and Facebook so that you can fetch all of the photos they have posted. To simply get all of a user's photos, both FB and Picasa require the same approach:
Get a list of albums for the user
Get a list of pictures for each album
So for any given provider, this approach requires N + 1 requests (N being the number of albums) to the 3rd party. If you are doing a couple of these operations at once, this seems like it would get avoidably slow.
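Whatever the provider allows, one general mitigation is to fire the N album requests concurrently so the round trips overlap rather than queue up. A minimal Go sketch; fetchPhotos is a hypothetical stand-in for one per-album API call.

package main

import (
	"fmt"
	"sync"
)

// fetchPhotos is a hypothetical stand-in for one per-album API call.
func fetchPhotos(albumID string) []string {
	return []string{albumID + "/photo1", albumID + "/photo2"}
}

func main() {
	albums := []string{"a1", "a2", "a3"} // from the "list albums" request

	var (
		wg     sync.WaitGroup
		mu     sync.Mutex
		photos []string
	)
	for _, id := range albums {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			got := fetchPhotos(id) // the N album requests overlap
			mu.Lock()
			photos = append(photos, got...)
			mu.Unlock()
		}(id)
	}
	wg.Wait()
	fmt.Println(photos)
}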
There seem to be a few alternatives to this approach:
Facebook:
Get all photos related to user
Parse these to find which were actually posted by the user
This will also give you other users' photos in which your user is tagged, so it may end up performing worse than the original method due to the sheer size of the data as well as the number of paging requests involved.
Picasa:
There's a potential workaround here:
Get all photos from Picasa by person
That would probably work, but it seems hacky: what is a very high value that satisfies the allowable range yet is still guaranteed to be larger than the number of photos the user has?
I know this is not going to be fast no matter which route I go, but does anyone have suggestions on what I should do here? There's also always the possibility that I'm looking at it completely wrong too.
I suggest you use FQL:
http://developers.facebook.com/docs/reference/fql/photo/
and
http://developers.facebook.com/docs/reference/fql/photo_tag/
It allows you to make one big query which Facebook processes on their end; you can tweak it so it returns, for example, a list of pictures the user is tagged in.
I'm sorry I can't help with Picasa though; I've never worked with it.

Quova API, anyone? It's a complete disaster; does anyone know better IP geolocation services?

I need a geolocation service, and I wanted to try some of them before buying anything for my client.
I tried api.ipinfodb.com and it is pretty good...
Then I recently tried the Quova API; as far as I remember, Quova was considered good...
Well... I tried it and the result is really sloppy... the zip code with ipinfodb.com was perfect, whereas Quova's was quite distant...
Also, the XML from the first was well formatted, whereas Quova gives you all-lowercase names... why? Shouldn't the city name be capitalized? I know I can do it with PHP, but with names you have to be careful... it just seems sloppy to me...
I wonder if the paid Quova service is the same.
I'm actually the product manager for Quova, so I hope I can help. Sorry you're having problems with the API.
To answer your first question about the zip code, no vendor can be right 100% of the time, and there will always be individual cases where we are wrong and someone else is right, or we are right and someone else is wrong. We do provide confidence factors to help you decide how confident we are in the assignments we make, which helps customers make better decisions about the data. Our customers stay with us because they know that the overall quality of our data outperforms the other vendors they've tried. If you respond with the actual IP addresses and ZIP codes that you think are wrong, I can have them investigated.
With regard to our data being all lowercase, we made that decision a long time ago to make the data predictable and to make comparisons with our data easier. I know there are use cases where having the correct capitalization of place names would be valuable, and lowercasing strings is easy enough if you have to do that, so we're considering how to provide capitalized names without impacting current customers who might be relying on the data in its current format. One thing you can do in the meantime is use the Lat/Long to look up the place name with a service like geonames.org.
To answer your last question, yes, the data is also lowercase in the commercial service.
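On the capitalization point: if you do need display-ready names from lowercase data, Unicode-aware title casing gets you most of the way, though the questioner is right that names need care (e.g. "mcdonald" or "o'brien" won't come out right without a special-case list). A small Go sketch using the external golang.org/x/text module; the example city names are arbitrary.

package main

import (
	"fmt"

	"golang.org/x/text/cases"
	"golang.org/x/text/language"
)

func main() {
	// cases.Title applies Unicode title-casing word by word.
	titler := cases.Title(language.English)
	for _, city := range []string{"new york", "winston-salem", "san josé"} {
		fmt.Println(titler.String(city)) // e.g. "new york" -> "New York"
	}
}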
