I want to make a link that is valid for only 24 hours, for a validation purpose. So my question is simple:
How do I make this link valid for only that time? I have a hint:
Get the epoch time.
Make a link using only this value: something.com/time/1359380374
When the user clicks on the link, extract this value and compare it with the current time.
I've heard about hash values. Why use them? We can't get the time back from a hash value (the process can't be inverted), so how is this done?
Your best bet is to have the user's email sent as an argument and then query the database to see whether their link has expired:
Requested-link query: update users set locked_stamp = now() where email = '$email';
Request URL: http://yourdomain.com/?email=useremail
Query: select true from users where email = '$email' and locked_stamp between now() - interval 1 hour and now() limit 1
Result: you have a person with email $email requesting within the hour.
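As a rough sketch of those two steps in Python (assuming a MySQL database reached through the MySQLdb driver and the users table implied by the queries above; the function names and connection details are illustrative, and the interval is parameterized so the question's 24-hour window fits):

import MySQLdb  # assumed driver; any DB-API module works the same way

conn = MySQLdb.connect(host="localhost", user="app", passwd="...", db="app")

# Step 1: when the user requests a link, stamp their row.
def request_link(email):
    cur = conn.cursor()
    cur.execute("UPDATE users SET locked_stamp = NOW() WHERE email = %s", (email,))
    conn.commit()
    cur.close()

# Step 2: when the link is clicked, check that the stamp is inside the window.
def link_is_valid(email, hours=24):
    cur = conn.cursor()
    cur.execute(
        "SELECT 1 FROM users WHERE email = %s "
        "AND locked_stamp BETWEEN NOW() - INTERVAL %s HOUR AND NOW() LIMIT 1",
        (email, hours))
    row = cur.fetchone()
    cur.close()
    return row is not None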
I have a script that uses base64 to encode the timestamp... but it's not secure by any means.
import base64
import re
import time

import tornado.ioloop
import tornado.web

def get_time():
    """Return the current Unix timestamp, base64-encoded."""
    return base64.b64encode(str(int(time.time())).encode()).decode()

class WebHandler(tornado.web.RequestHandler):
    def get(self, _time):
        try:
            timecheck = base64.b64decode(_time).decode()
            # Require the decoded value to be all digits
            assert re.match(r'^\d+$', timecheck) is not None
            # Must be within 1 hour: greater than 1 hour ago and less than now
            now = int(time.time())
            assert now - 3600 < int(timecheck) < now
        except (AssertionError, ValueError):
            raise tornado.web.HTTPError(401, 'Woops! Unauthorized.')
        else:
            self.write('Pass')

# Route
application = tornado.web.Application([
    (r"/([^/]+)/?", WebHandler),
])

if __name__ == "__main__":
    application.listen(8889)
    tornado.ioloop.IOLoop.instance().start()
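To hand the token out, you might build the link from get_time() against the route above (hypothetical host and port):

print("http://localhost:8889/" + get_time())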
Tornado can sign the timestamp for you, the same way it sets secure cookies: the signature is a keyed hash (HMAC) of the value plus a timestamp, so the server never needs to invert the hash, it just recomputes it and compares.
signed_message = self.create_signed_value(secret, name, value)
Then you can check it:
message = self.decode_signed_value(secret, name, value, max_age_days=31, clock=None, min_version=None)
Secret should be a long random number, but you only need one per app. min_version could be DEFAULT_SIGNED_VALUE_VERSION (which is currently 2).
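A minimal sketch using the module-level helpers in tornado.web (the secret string and the 24-hour window are assumptions for this example; the handler methods above read the secret from the application's cookie_secret setting instead):

import tornado.web

secret = "a-long-random-string-generated-once-per-app"

# When generating the link: sign a value (here an email) under a name.
signed = tornado.web.create_signed_value(secret, "activation", "user@example.com")
link = "https://example.com/activate/" + signed.decode()  # URL-encode in real code

# When the link is visited: returns the original value, or None if the
# signature is bad or the value is older than max_age_days.
value = tornado.web.decode_signed_value(secret, "activation", signed, max_age_days=1)
if value is None:
    print("Link invalid or older than 24 hours")
else:
    print("Activate account for", value.decode())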
Don't roll your own solution. Use the one in the library. It's there. It works.
I am using SQLAlchemy to query the database from my Flask web application using the engine. After I run a SELECT query I call fetchall() on the ResultProxy that is returned, which gives me RowProxy objects, and then I store the result in the session.
Here is my code:
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from flask import Flask, session

engine = create_engine(os.environ.get('DATABASE_URL'))
db = scoped_session(sessionmaker(bind=engine))

app = Flask(__name__)
app.secret_key = os.environ.get('SECRET_KEY')

@app.route('/')
def index():
    session['list'] = db.execute("SELECT title,author,year FROM books WHERE year = 2011 LIMIT 4").fetchall()
    print(session['list'])
    return "<h1>hello world</h1>"

if __name__ == "__main__":
    app.run(debug=True)
Here is the output:
[('Steve Jobs', 'Walter Isaacson', 2011), ('Legend', 'Marie Lu', 2011), ('Hit List', 'Laurell K. Hamilton', 2011), ('Born at Midnight', 'C.C. Hunter', 2011)]
Traceback (most recent call last):
File "C:\Users\avise\AppData\Local\Programs\Python\Python38\Lib\site-packages\flask\app.py", line 2463, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\avise\AppData\Local\Programs\Python\Python38\Lib\site-packages\flask\app.py", line 2449, in wsgi_app
response = self.handle_exception(e)
File "C:\Users\avise\AppData\Local\Programs\Python\Python38\Lib\site-packages\flask\app.py", line 1866, in handle_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\avise\AppData\Local\Programs\Python\Python38\Lib\json\encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type RowProxy is not JSON serializable
The session variable stores the data, as I can see in the output, but "hello world" is not rendered.
If I replace the session variable with an ordinary variable, say x, it works.
But I think I need to use sessions so that my application can be used simultaneously by two users to display different things. So how can I use sessions in this case, or is there another way?
Any help will be appreciated as I am new to Flask and web-development.
From what I understand, the Flask session object acts like a Python dictionary; however, values must be JSON serializable. In this case, just as the error suggests, the RowProxy object returned by fetchall() is not JSON serializable.
A solution to this problem is to instead store the result of your query as a dictionary (which is JSON serializable).
It looks like the result of your query is a list of tuples, so we can do the following:
res = db.execute("SELECT title,author,year FROM books WHERE year = 2011 LIMIT 4").fetchall()

user_books = {}
for index, entry in enumerate(res):
    user_books[index] = {'title': entry[0],
                         'author': entry[1],
                         'year': entry[2]}

session['list'] = user_books
A word of caution, however: if you key the dictionary by the book's title instead of a counter, two books with the same title would overwrite each other, so consider using a unique id as the key.
Also note that the dictionary construction above would only work for the query you already have - if you added another column to the select statement you would have to edit the code to include the extra column information.
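Depending on your SQLAlchemy version, there may be a shorter route: rows can often be converted to plain dicts directly, which keeps all selected columns without hand-written keys (a sketch, not tested across every version):

res = db.execute("SELECT title,author,year FROM books WHERE year = 2011 LIMIT 4").fetchall()
session['list'] = [dict(row) for row in res]  # a list of JSON-serializable dicts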
Using discord.py rewrite, how can we diagnose my_background_task to find the reason why its print statement is not printing every 3 seconds?
Details:
The problem I am observing is that "print('inside loop')" is printed once in my logs, but not every three seconds as expected. Could there be an exception somewhere that I am not catching?
Note: I do see print(f'Logged in as {bot.user.name} - {bot.user.id}') in the logs, so on_ready seems to work and that method cannot be to blame.
I tried following this example: https://github.com/Rapptz/discord.py/blob/async/examples/background_task.py
however, I did not use its client = discord.Client() statement because I think I can achieve the same thing using "bot", as explained here: https://stackoverflow.com/a/53136140/6200445
import asyncio

import discord
from discord.ext import commands

token = open("token.txt", "r").read()

def get_prefix(client, message):
    prefixes = ['=', '==']
    if not message.guild:
        prefixes = ['==']  # Only allow '==' as a prefix when in DMs, this is optional

    # Allow users to @mention the bot instead of using a prefix when using a command. Also optional
    # Do `return prefixes` if you don't want to allow mentions instead of a prefix.
    return commands.when_mentioned_or(*prefixes)(client, message)

bot = commands.Bot(  # Create a new bot
    command_prefix=get_prefix,  # Set the prefix
    description='A bot for doing cool things. Commands list:',  # Description for the bot
    case_insensitive=True  # Make the commands case insensitive
)

# case_insensitive=True is used as the commands are case sensitive by default

cogs = ['cogs.basic', 'cogs.embed']

@bot.event
async def on_ready():  # Do this when the bot is logged in
    print(f'Logged in as {bot.user.name} - {bot.user.id}')  # Print the name and ID of the bot logged in.
    for cog in cogs:
        bot.load_extension(cog)
    return

async def my_background_task():
    await bot.wait_until_ready()
    print('inside loop')  # This prints one time. How to make it print every 3 seconds?
    counter = 0
    while not bot.is_closed:
        counter += 1
        await bot.send_message(channel, counter)
        await channel.send(counter)
        await asyncio.sleep(3)  # task runs every 3 seconds

bot.loop.create_task(my_background_task())
bot.run(token)
From a cursory inspection, it would seem your problem is that the print statement runs only once, before the loop begins; it is the send call inside the while loop that would run every three seconds. For the intended behavior, place the print statement inside your while loop.
Although I am using rewrite, I found both of these resources helpful.
https://github.com/Rapptz/discord.py/blob/async/examples/background_task.py
https://github.com/Rapptz/discord.py/blob/rewrite/examples/background_task.py
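For example, a minimal corrected sketch of the task (the channel id is a hypothetical placeholder; note that in rewrite bot.is_closed() is a method and channel.send() replaces bot.send_message()):

async def my_background_task():
    await bot.wait_until_ready()
    channel = bot.get_channel(123456789012345678)  # hypothetical channel id
    counter = 0
    while not bot.is_closed():
        print('inside loop')  # now prints every 3 seconds
        counter += 1
        await channel.send(counter)
        await asyncio.sleep(3)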
I am trying to integrate a QnA Maker knowledge base with Azure Bot Service.
I am unable to find the knowledge base id on the QnA Maker portal.
How do I find the kbid in the QnA portal?
The knowledge base id can be located in Settings under "Deployment details" in your knowledge base. It is the GUID nestled between "knowledgebases" and "generateAnswer" in the sample POST request shown there.
Hope this helps!
You can also use Python to get this, if you want to write a program that dynamically gets the kb ids. Take a look at the following code.
import http.client
import json
import sys

# Represents the various elements used to create the HTTP request path
# for QnA Maker operations.

# Replace these with your resource name and a valid subscription key.
host = '<your-resource-name>.cognitiveservices.azure.com'
subscription_key = '<QnA-Key>'
get_kb_method = '/qnamaker/v4.0/knowledgebases/'

try:
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-Type': 'application/json'
    }
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", get_kb_method, None, headers)
    response = conn.getresponse()
    data = response.read().decode("UTF-8")

    result = None
    if len(data) > 0:
        result = json.loads(data)
        # print(json.dumps(result, sort_keys=True, indent=2))

    KB_id = result["knowledgebases"][0]["id"]
    print(response.status)
    print(KB_id)
except Exception:
    print("Unexpected error:", sys.exc_info()[0])
    print("Unexpected error:", sys.exc_info()[1])
I'm trying to verify a link that will expire in a week. I have an activator_token stored in the database, which is used to generate the link in this format: http://www.example.com/activator_token. (These are not activation tokens generated by Devise or Authlogic.)
Is there a way to make this activator token expire (in a week or so) without comparing it against updated_at or some other stored date? Something like an encoded token that returns nil when decoded after a week. Do any existing Ruby modules do this? I don't want to store the generation date in the database or in an external store like Redis and compare it with Time.now. I want it to be very simple, and wanted to know whether something like this already exists before writing the logic again.
What you want to use is https://github.com/jwt/ruby-jwt.
Here is some boilerplate code so you can try it out yourself.
require 'openssl'
require 'jwt'

# Generate your keys when deploying your app.
# Doing so using a rake task might be a good idea.
# How to persist and load the keys is up to you!
rsa_private = OpenSSL::PKey::RSA.generate 2048
rsa_public = rsa_private.public_key

# Do this when you are about to send the email.
exp = Time.now.to_i + 4 * 3600  # expires in 4 hours; use 7 * 24 * 3600 for a week
payload = {exp: exp, discount: '9.99', email: 'user@example.com'}

# When generating an invite email, this is the token you want to
# incorporate in your link as a parameter.
token = JWT.encode payload, rsa_private, 'RS256'
puts token
puts token.length

# This goes into your controller.
begin
  # token = params[:token]
  decoded_token = JWT.decode token, rsa_public, true, { :algorithm => 'RS256' }
  puts decoded_token.first
  # continue with your business logic
rescue JWT::ExpiredSignature
  # Handle expired token,
  # e.g. inform the user that their invite link has expired!
  puts "Token expired"
end
I use the GitHub API v3 to get the forks count for a repository:
GET /repos/:owner/:repo/forks
The request brings me only 30 results, even if the repository contains more. I googled a little and found that the API returns only 30 results per page, and if I want the next results I have to request the following pages.
But I don't need all this information; all I need is the number of forks.
Is there any way to get only the number of forks?
Because if I start to loop page by page, my script risks crashing if a repository contains thousands of forks.
You can try and use a search query.
For instance, for my repo VonC/b2d, I would use:
https://api.github.com/search/repositories?q=user%3AVonC+repo%3Ab2d+b2d
The JSON answer gives me "forks_count": 5.
Here is one with more than 4000 forks (consider only the first result, meaning the one whose "full_name" is actually "strongloop/express")
https://api.github.com/search/repositories?q=user%3Astrongloop+repo%3Aexpress+express
"forks_count": 4114,
I had a job where I needed to get all forks of a GitHub project as git remotes.
I wrote a simple Python script: https://gist.github.com/urpylka/9a404991b28aeff006a34fb64da12de4
At the base of the program is a recursive function for getting the forks of a fork, and I hit the same problem (the GitHub API was returning only 30 items).
I solved it by incrementing a ?page= parameter and checking for an empty response from the server.
import requests  # the gist linked above contains the full script

def get_fork(username, repo, forks, auth=None):
    page = 1
    while True:
        request = "https://api.github.com/repos/{}/{}/forks?page={}".format(username, repo, page)
        if auth is None:
            r = requests.get(request)
        else:
            r = requests.get(request, auth=(auth['login'], auth['secret']))
        j = r.json()
        r.close()
        if 'message' in j:
            print("username: {}, repo: {}".format(username, repo))
            print(j['message'] + " " + j['documentation_url'])
            if str(j['message']) == "Not Found":
                break
            else:
                exit(1)
        if len(j) == 0:
            break
        else:
            page += 1
        for item in j:
            forks.append({'user': item['owner']['login'], 'repo': item['name']})
            if auth is None:
                get_fork(item['owner']['login'], item['name'], forks)
            else:
                get_fork(item['owner']['login'], item['name'], forks, auth)
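A hypothetical call that collects every fork of VonC/b2d (from the earlier answer) into a flat list:

forks = []
get_fork("VonC", "b2d", forks)
print(len(forks), "forks found")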