Authentication Failure when using external JWT token in SurrealDB - surrealdb

Can anyone help me set up authentication using an external JWT token?
So far I have tried multiple variants of the following.
First I define the token using:
DEFINE TOKEN my_token ON DATABASE TYPE HS512 VALUE '1234567890';
Then I generate a token using the secret '1234567890' above and the following header fields:
{
  "alg": "HS512",
  "typ": "JWT",
  "NS": "help",
  "DB": "help",
  "TK": "my_token"
}
Note: I have also tried defining the "NS", "DB", "TK" fields in the payload section of the token.
Then I try to authenticate using the token in the JS client, and in an HTTP request with a Bearer authorization header:
db.authenticate("eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCIsIk5TIjoiaGVscCIsIkRCIjoiaGVscCIsIlRLIjoibXlfdG9rZW4ifQ.e30.uoJypJ-Y9OrZjQW6WtuZWmFYBEOCHlkutbR6mlEYPCHvb49h9nFiWshKDc464MD3jaBh69T1OLwZ2aUWNujiuw")
I get the following error with both the JS client and the HTTP request:
name: "AuthenticationError"
message: "There was a problem with authentication"
stack: "AuthenticationError: There was a problem with authentication\n at Surreal.

This answer ended up being more comprehensive than originally intended. As such, here's a list of contents to help you find what you're looking for. Unfortunately it seems to be impossible to convert them to links. (Sorry)
Table of Contents
Composing a JWT Token
Parts of a Token
Token Header
Token Payload
Token Signature
Encoding The Token
Example: Step-by-step
Using NodeJs
SurrealDB Token Authentication
Defining a Token Handler
Using The Token We Made
Using Public Key Cryptography
SurrealDB Permissions
Token Types
Namespace Token
Database Token
Scope Token
Table Permissions
FULL: Available to Query Without Any Authentication
NONE: Restricted Tables (Implicit Default)
Granular Table Permissions
Granular Field Permissions
Accessing Token & Auth Data from Queries
Further Reading
Composing a JWT Token
Now we need to generate a token to test it with. A JSON Web Token (JWT), as you may know, consists of three parts: the header, the payload, and the signature. Each part is base64url encoded (a form of base64 encoding that only uses characters safe in a web address or hyperlink).
Parts of a token
Token Header
The header describes to the verifying party, in this case SurrealDB, what kind of token it is and what algorithm it uses. Let's create that:
{
  "alg": "HS512",
  "typ": "JWT"
}
Token Payload
Now, the payload is the fun part.
For use with SurrealDB, there are a number of fields which determine how the database will process the token.
The types of token allowed by SurrealDB as of version surreal-1.0.0-beta.8 are as follows:
scope token authentication: (ns, db, sc, tk [, id])
database token authentication: (ns, db, tk)
namespace token authentication: (ns, tk)
For details, see:
Token Verification Logic - SurrealDB - GitHub
The listed fields are names of:
ns :string Namespace
db :string Database
sc :string Scope
tk :string Token
id ?:string Thing (table row) representing a user (optional)
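Putting those lists together, minimal example payloads for each token type look something like this (the values are placeholders; the scope and record names are made up):

// Namespace token payload
{ "ns": "help", "tk": "my_ns_token" }

// Database token payload
{ "ns": "help", "db": "help", "tk": "my_token" }

// Scope token payload (id is optional)
{ "ns": "help", "db": "help", "sc": "my_scope", "tk": "my_scope_token", "id": "users:some_user" }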
There are also a number of publicly registered field names with various meanings - relevant if you want interoperability or standardisation, less so for simply working with SurrealDB. You can put any serialisable data you want into the payload. Keep in mind, however, that this data will be sent over the network with every request, so it's worth keeping it small.
If you're curious:
List of publicly registered JWT fields - maintained by IANA
Let's create a database token. When we defined it we called it my_token, so let's add that as our tk field, along with the db and ns from your question. The field names are not case-sensitive as far as SurrealDB's verification is concerned; however, they will be case-sensitive if you later access the payload data directly, as part of a permission clause or a SELECT query.
{
  "ns": "help",
  "db": "help",
  "tk": "my_token",
  "someOtherValue": "justToShowThatWeCan"
}
Token Signature
Once we have composed the header and payload, the last step in creating a token is to sign it.
The signature is created by removing the whitespace from the header and the payload, base64url encoding each of them, and concatenating the two with a dot (period/full-stop) between them.
The whole string is then passed through the hashing algorithm (in this case HMAC-SHA512) along with the secret key, and the result is base64url encoded to form the signature.
In case you're interested in more depth:
How HMAC combines the key with the data - Wikipedia
Let's see it in action:
Encoding The Token
Example: Step-by-step
The encoded header
eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9
The encoded payload
eyJucyI6ImhlbHAiLCJkYiI6ImhlbHAiLCJ0ayI6Im15X3Rva2VuIiwic29tZU90aGVyVmFsdWUiOiJqdXN0VG9TaG93VGhhdFdlQ2FuIn0
Concatenate separated by a dot
eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJucyI6ImhlbHAiLCJkYiI6ImhlbHAiLCJ0ayI6Im15X3Rva2VuIiwic29tZU90aGVyVmFsdWUiOiJqdXN0VG9TaG93VGhhdFdlQ2FuIn0
Hash the result with the secret key to get:
8nBoXQQ_Up3HGKBB64cKekw906zES8GXa6QZYygYWD5GbFoLlcPe2RtMMSAzRrHHfGRsHz9F5hJ1CMfaDDy5AA
Append the signature to the result, again separated by a dot
eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJucyI6ImhlbHAiLCJkYiI6ImhlbHAiLCJ0ayI6Im15X3Rva2VuIiwic29tZU90aGVyVmFsdWUiOiJqdXN0VG9TaG93VGhhdFdlQ2FuIn0.8nBoXQQ_Up3HGKBB64cKekw906zES8GXa6QZYygYWD5GbFoLlcPe2RtMMSAzRrHHfGRsHz9F5hJ1CMfaDDy5AA
And that's our complete token!
You can use jwt.io to play around with payloads, headers, and signature algorithms.
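If you'd like to reproduce those steps yourself, here is a minimal sketch using Node's built-in crypto module (no external dependencies; the secret and payload are the ones used in this answer, and Node 16+ is assumed for the 'base64url' encoding):

import { createHmac } from 'node:crypto';

// base64url encode (URL-safe base64, no padding), as required by JWT
const base64url = (input) => Buffer.from(input).toString('base64url');

const header = { alg: 'HS512', typ: 'JWT' };
const payload = { ns: 'help', db: 'help', tk: 'my_token', someOtherValue: 'justToShowThatWeCan' };

// JSON.stringify emits no extra whitespace; encode each part and join with a dot
const signingInput = `${base64url(JSON.stringify(header))}.${base64url(JSON.stringify(payload))}`;

// HMAC-SHA512 over the signing input with the shared secret, then base64url encode
const signature = createHmac('sha512', '1234567890').update(signingInput).digest('base64url');

const token = `${signingInput}.${signature}`;
console.log(token);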
Using NodeJs
If you just want to encode tokens on Node.js, I would recommend the package: npm - jsonwebtoken
npm i jsonwebtoken
import jwt, { JwtPayload } from 'jsonwebtoken';

// Typescript types
type signTokenFn = (payload: object, secretOrPrivateKey: string, options?: object) => Promise<string>;
type verifyTokenFn = (token: string, secretOrPublicKey: string, options?: object) => Promise<string | JwtPayload>;

// Let's make the callback-style API await-able
const promisifyCallback = (resolve, reject) => (failure, success) => failure ? reject(failure) : resolve(success);

const signToken: signTokenFn = async (payload, secretOrPrivateKey, options = {}) => new Promise((resolve, reject) => {
  jwt.sign(payload, secretOrPrivateKey, options, promisifyCallback(resolve, reject));
});

const verifyToken: verifyTokenFn = async (token, secretOrPublicKey, options = {}) => new Promise((resolve, reject) => {
  jwt.verify(token, secretOrPublicKey, options, promisifyCallback(resolve, reject));
});

// The actual encoding/verifying
const secret = '1234567890'; // must match the VALUE in the DEFINE TOKEN statement

const tokenPayload = {
  ns: "help",
  db: "help",
  tk: "my_token",
  someOtherValue: "justToShowThatWeCan"
};

const signedToken = await signToken(tokenPayload, secret, {
  algorithm: 'HS512', // jsonwebtoken defaults to HS256; match the TYPE in DEFINE TOKEN
  expiresIn: '10m'    // set any duration here, e.g. '24h'
});

const accessDecoded = await verifyToken(signedToken, secret);
SurrealDB Token Authentication
Defining a Token Handler
You're correct in your question about how to define the token handler, so let's do that:
DEFINE TOKEN my_token ON DATABASE TYPE HS512 VALUE '1234567890';
A token can be defined on a namespace (ns), database (db), or scope. The latter is as yet undocumented, as it's one of the recent commits to the codebase. See:
Commit (75d1e86) "Add DEFINE TOKEN … ON SCOPE … functionality" - SurrealDB on GitHub
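If you'd rather run that DEFINE statement from code, a quick sketch with the JS client could look like this (assuming you first connect and sign in as a root user; adjust the credentials and connection URL to your setup):

import Surreal from 'surrealdb.js';

const db = new Surreal('http://127.0.0.1:8000/rpc');

// Sign in with credentials that are allowed to run DEFINE statements
await db.signin({ user: 'root', pass: 'root' });
await db.use('help', 'help');

await db.query("DEFINE TOKEN my_token ON DATABASE TYPE HS512 VALUE '1234567890';");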
Using The Token We Made
Using the VS Code REST Client extension, we can test our token like this:
POST /sql HTTP/1.1
Host: localhost:8000
Content-Type: text/plain
Accept: application/json
Token: eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJucyI6ImhlbHAiLCJkYiI6ImhlbHAiLCJ0ayI6Im15X3Rva2VuIiwic29tZU90aGVyVmFsdWUiOiJqdXN0VG9TaG93VGhhdFdlQ2FuIn0.8nBoXQQ_Up3HGKBB64cKekw906zES8GXa6QZYygYWD5GbFoLlcPe2RtMMSAzRrHHfGRsHz9F5hJ1CMfaDDy5AA
NS: help
DB: help
SELECT * FROM test;
We should get a response like this:
HTTP/1.1 200 OK
content-type: application/json
version: surreal-1.0.0-beta.8+20220930.c246533
server: SurrealDB
content-length: 91
date: Tue, 03 Jan 2023 00:09:49 GMT
[
  {
    "time": "831.535µs",
    "status": "OK",
    "result": [
      {
        "id": "test:record"
      },
      {
        "id": "test:record2"
      }
    ]
  }
]
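If you're not using the REST Client extension, the same request can be sketched with fetch (Node 18+ or the browser), sending the same headers as above; depending on your SurrealDB version you may need an Authorization: Bearer header instead of Token:

const token = 'eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9...'; // the full token we generated above

const response = await fetch('http://localhost:8000/sql', {
  method: 'POST',
  headers: {
    'Content-Type': 'text/plain',
    'Accept': 'application/json',
    'Token': token,
    'NS': 'help',
    'DB': 'help',
  },
  body: 'SELECT * FROM test;',
});

console.log(await response.json());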
Now that we know it's working, let's try it out with the javascript client library. (This is the same for Node.JS)
import Surreal from 'surrealdb.js';

const db = new Surreal('http://127.0.0.1:8000/rpc');

const NS = 'help';
const DB = 'help';

async function main() {
  await db.authenticate('eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJucyI6ImhlbHAiLCJkYiI6ImhlbHAiLCJ0ayI6Im15X3Rva2VuIiwic29tZU90aGVyVmFsdWUiOiJqdXN0VG9TaG93VGhhdFdlQ2FuIn0.8nBoXQQ_Up3HGKBB64cKekw906zES8GXa6QZYygYWD5GbFoLlcPe2RtMMSAzRrHHfGRsHz9F5hJ1CMfaDDy5AA');
  await db.use(NS, DB);

  const result = await db.select('test');
  console.log(result);
  // [
  //   { id: 'test:record' },
  //   { id: 'test:record2' }
  // ]
}

main();
Using Public Key Cryptography
If you want, you can also use a public/private key-pair to allow for verifying tokens without the need to share the secret needed to generate authentic tokens.
import crypto from 'node:crypto';

// Generate fresh RSA keys for access tokens on startup
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: { type: 'spki', format: 'pem' },
  privateKeyEncoding: { type: 'pkcs8', format: 'pem' },
});

async function main() {
  // Add our public key to SurrealDB as the verifier
  // (assumes `db` is the connected Surreal client from the previous example)
  await db.query(`DEFINE TOKEN my_token ON DATABASE TYPE RS256 VALUE "${publicKey}";`);
  console.log('yay!');
}

main();
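With the public key registered as the verifier, tokens are then signed with the matching private key. Here is a sketch reusing the signToken helper from the jsonwebtoken section above (the RS256 algorithm must match the TYPE in the DEFINE TOKEN statement):

// Sign with the private key; SurrealDB verifies it with the public key we registered
const rsaToken = await signToken(
  { ns: 'help', db: 'help', tk: 'my_token' },
  privateKey,
  { algorithm: 'RS256', expiresIn: '10m' }
);

await db.authenticate(rsaToken);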
SurrealDB Permissions
As mentioned above, there are three types of tokens which can be defined and used to authenticate queries.
Token Types
Namespace Token
-- Will apply to the current namespace
DEFINE TOKEN #name ON NAMESPACE TYPE #algorithm VALUE #secretOrPublicKey;
-- Can also be abbreviated:
DEFINE TOKEN #name ON NS TYPE #algorithm VALUE #secretOrPublicKey;
Warning: Table and field permissions will not be processed when executing queries for namespace token bearers.
This type of token gives the authenticated user or system the ability to access the entire namespace on which the token is defined.
That includes select, create, update, and delete (SCUD) access to all tables in all databases, as well as the ability to define and remove databases and tables.
Database Token
-- Will apply to the current database
DEFINE TOKEN #name ON DATABASE TYPE #algorithm VALUE #secretOrPublicKey;
-- Can also be abbreviated:
DEFINE TOKEN #name ON DB TYPE #algorithm VALUE #secretOrPublicKey;
Warning: Table and field permissions will not be processed when executing queries for database token bearers.
This type of token gives the authenticated user or system the ability to access the entire database on which the token is defined.
That includes select, create, update, and delete (SCUD) access to all tables in the specific database, as well as the ability to define and remove tables.
Scope Token
-- Requires a defined scope on which to define the token; scope is defined as a property on the current database.
DEFINE SCOPE #name;
-- Define the token after we define the scope:
DEFINE TOKEN #name ON SCOPE #name TYPE #algorithm VALUE #secretOrPublicKey;
Table and field permissions will be processed as normal when executing queries for scope token bearers.
This type of token gives the authenticated user or system the ability to access the database on which the scope is defined, but only to the extent permitted by permissions defined for tables and fields.
That includes select, create, update, and delete (SCUD) access to all tables (permissions allowing) in the specific database, however scoped tokens may not create, modify, view info for, nor delete tables.
The optional id field in the payload allows a scope token to be linked to a table row. This could be a user account, a client id for batch or automated systems, etc.; the semantics are up to you. In table permissions, the id can be accessed via $token.id and the record it points to can be accessed via $auth.
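For example, a scope token payload carrying an id might be signed like this, reusing the signToken helper from earlier (the scope name user_scope and the record users:tobie are made-up examples):

const scopeToken = await signToken(
  {
    ns: 'help',
    db: 'help',
    sc: 'user_scope',     // the scope the token is defined on
    tk: 'my_scope_token', // the name given in DEFINE TOKEN ... ON SCOPE ...
    id: 'users:tobie',    // the row this bearer is linked to; exposed as $token.id / $auth
  },
  '1234567890',
  { algorithm: 'HS512', expiresIn: '24h' }
);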
Table Permissions
FULL: Available to Query Without Any Authentication
DEFINE TABLE this_table_is_publicly_accessible;
When you define a table, note that if you do not define any permissions for it, the default is that it is accessible to the public, i.e. without any kind of authentication.
Keep in mind that when using strict mode you will need to explicitly define your tables before you can use them. To avoid unintentionally making them public, always set some kind of permission.
NONE: Restricted Tables (Implicit Default)
CREATE restricted:hello;
-- The above implicitly creates a table with this definition:
DEFINE TABLE restricted SCHEMALESS PERMISSIONS NONE;
If you leave a table undefined, but begin creating entries, thus implicitly creating the table, it is given a default set of permissions allowing no public access and no scoped access. Only database token bearers and namespace token bearers will be able to access the data.
Granular Table Permissions
DEFINE TABLE granular_access SCHEMALESS PERMISSIONS
FOR select FULL
FOR create,update WHERE $token.someOtherValue = "justToShowThatWeCan"
FOR delete NONE;
Here we allow anyone to select from the table without authentication, while only scope users whose token has someOtherValue set to "justToShowThatWeCan" may create and update. Nobody with a scope token may delete; only database and namespace token bearers may delete from the table.
Granular Field Permissions
DEFINE FIELD more_granular ON TABLE granular_access PERMISSIONS
FOR select FULL
FOR create,update WHERE $token.someOtherValue = "justToShowThatWeCan"
FOR delete NONE;
Similar to whole tables, permissions can also be set on individual fields.
Accessing Token & Auth Data from Queries
The protected params $session, $scope, $token, and $auth contain extra information related to the client.
To see what data is available to access, try running the queries:
SELECT * FROM $session;
SELECT * FROM $token;
SELECT * FROM $scope;
SELECT * FROM $auth;
While using a namespace or database token, only the $session and $token parameters have values. Briefly:
$session is an object which contains session data; the most useful-looking field is $session.ip, which shows the client IP and outgoing port of the connection to SurrealDB, for example 127.0.0.1:60497.
$token exposes, as an object, all of the fields present in the payload of the JWT used to authenticate the session.
$scope seems to simply contain the name of the scope to which the user/client has access.
$auth is present when the scoped JWT also contains an id field, and holds the data from the table row specified by id. For example, if id contains users:some_row_id then $auth will contain the row it points to, if it exists and if the scope has permission to access it. Fields can be hidden from this object using permissions as well.
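A quick way to inspect these from the JS client once you've authenticated (a sketch, assuming the client returns one result object per statement as in the HTTP response above):

// After db.authenticate(...) and db.use(...), see what the server knows about us
const [sessionInfo, tokenInfo] = await db.query('SELECT * FROM $session; SELECT * FROM $token;');

console.log(sessionInfo.result); // e.g. [ { ip: '127.0.0.1:60497', ... } ]
console.log(tokenInfo.result);   // the JWT payload fields: ns, db, tk, someOtherValue, ...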
Further Reading
SurrealDB - Features page (contains code examples)
SurrealDB Docs - DEFINE statements
Note that the documentation does not, as of this writing, show the ability to define a token upon a scope. It is, however, valid and tested as shown.
SurrealDB Commit - defining token on scope
SurrealDB Codebase - parsing logic for DEFINE statements
SurrealDB Codebase - parsing logic for PERMISSIONS
SurrealDB Codebase - token verification logic
NPM Package - jsonwebtoken
JWT.io - JSON Web Tokens (encoder/decoder)
Wikipedia - HMAC

Related

Laravel 5.6 - Reset Password Tokens how to ensure they match?

When a user forgets their password and tries to reset it, they get a password reset email link with a token:
site.com/my/password/reset/ddc3669ab1bbd78abe620ef910716ae91678bb4beb5cd8896e21efaaa0c9d5c6
On the backend though, the token in the database password_resets table looks like it's hashed:
$2y$10$O6pgU21FhsOcgpgeqR8RReFYoCGhHNBH5bHKxtE6q1tJMTUufrbr.
So when the route is visited, the only piece of identifying information passed is the token:
ddc3669ab1bbd78abe620ef910716ae91678bb4beb5cd8896e21efaaa0c9d5c6
// Controller method
public function passwordResetVerifyByToken($token)
{
    $record = DB::table('password_resets')
        ->where('token', $token)
        ->first();
}
Of course we won't get a record, as the plain token from the email will NOT match the hashed one in the database with the above query. So when the user clicks the plain emailed token link, how can we compare it to the one in the database to verify that it exists and matches?
You should use the Hash::check method, which will return true or false depending on whether the hash of the reset token matches the stored database value.
if (Hash::check($token, $row->token)) {
// The passwords match...
}
Laravel docs:
https://laravel.com/docs/5.6/hashing#basic-usage
Don't worry - Laravel has its own Hash helper for this; you can use:
if (Hash::check($token, $row->token)) {
// write your code or other function
}

Migrating User to Cognito on Sign In

I am trying to migrate users to Cognito when they sign in for the first time. For this I wrote a Lambda function that calls an API to check whether the user exists in the DB or not. If the user exists, it is created in Cognito, but I am not sure how I tell the application that the user has been created and that it should allow the user to log in.
Here is the code in c#:
public async Task<Stream> FunctionHandlerAsync(Stream stream, ILambdaContext context)
{
    RootObject rootObj = DeserializeStream(stream);
    User user = new User(rootObj.userName, rootObj.request.password);
    ApiResponse apiResponse = await MobileAuthenticateAsync(user.UserName, user.Password);

    // Assuming apiResponse says "user authenticated", we create the user in Cognito. This is working.
    // How do I send a response back to the application so it knows that the user is
    // created and authenticated, and should be allowed to log in?

    // Before returning the stream, I am setting the following two statuses.
    rootObj.response.finalUserStatus = "CONFIRMED"; // is this correct?
    rootObj.response.messageAction = "SUPPRESS";
    return SerializeToStream(rootObj);
}
You're pretty close.
You can see the full documentation on the Migrate User Lambda Trigger page, however in short you need your response to look like:
{
  response: {
    userAttributes: {
      email: 'user@example.com',
      email_verified: true,
      'custom:myAttribute': 123,
    },
    finalUserStatus: 'CONFIRMED',
    messageAction: 'SUPPRESS',
    forceAliasCreation: false,
  }
}
Where:
userAttributes: a dictionary/map from the user's attribute keys in Cognito (note that any custom attributes need to be prefixed with custom:) to the values from the system you're migrating from. You do not need to provide all of them, although if you're using an email alias you may want to set email_verified: true to prevent the user having to re-verify their e-mail address.
finalUserStatus: if you set this to CONFIRMED then the user will not have to re-confirm their email address/phone number, which is probably a sensible default. If you are concerned that the password is given as plain-text to cognito this first-time, you can instead use RESET_REQUIRED to force them to change their password on first sign-in.
messageAction: should probably be SUPPRESS unless you want to send them a welcome email on migration.
forceAliasCreation: is important only if you're using email aliases, as it stops users who manage to sign-up into cognito being replaced on migration.
If you respond with this (keeping the rest of the original rootObj is convenient but not required), then the user will be migrated with the attributes as specified.
If you throw (or fail to respond with the correct event shape) then the migration lambda fails and the user is told that they couldn't be migrated - for example, because they do not exist in your old user database, or they haven't provided the right credentials.
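For comparison, here is a minimal sketch of the same trigger written as a Node.js Lambda; lookUpLegacyUser is a hypothetical helper standing in for the API call to your existing user store:

export const handler = async (event) => {
  if (event.triggerSource === 'UserMigration_Authentication') {
    // Check the credentials against the legacy system (hypothetical helper)
    const legacyUser = await lookUpLegacyUser(event.userName, event.request.password);
    if (!legacyUser) {
      // Throwing fails the trigger, so the sign-in is rejected
      throw new Error('Bad credentials');
    }

    event.response.userAttributes = {
      email: legacyUser.email,
      email_verified: 'true',
    };
    event.response.finalUserStatus = 'CONFIRMED';
    event.response.messageAction = 'SUPPRESS';
  }

  // Returning the mutated event tells Cognito to create and confirm the user
  return event;
};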

Update viewer fields and connection on Store

I'm trying to update the values and connections on my current viewer within the Relay store.
So, without calling the signIn mutation, if I print:
console.log(viewer.name) // "Visitor"
console.log(viewer.is_anonymous) // true
Mutations give us an updater method which provides access to the store. In my mutation I'm doing something like this:
mutation SignInMutation($input: SignInInput!) {
  signIn(input: $input) {
    user {
      id
      name
      email
      is_anonymous
      notifications {
        edges {
          node {
            id
            ...NotificationItem_notification
          }
        }
      }
    }
    token
  }
}
So my updater method has:
const viewer = store.get(viewer_id);
const signIn = store.getRootField('signIn');
viewer.copyFieldsFrom(signIn.getLinkedRecord('user'))
After the updater runs, the store has the name, email, and is_anonymous fields updated with the data that just came from the GraphQL endpoint (name is now "Erick", is_anonymous is now false, which is great). But if I try to read viewer.notifications and render it, the connection appears to have length 0 even though it has notifications.
How can I update my current viewer and add the notifications from the MutationPayload into the store without the need to force fetch?
Im using the latest relay-modern and graphql.
With some reorganisation of your GraphQL schema it might be possible to remove the need to interact directly with the Relay store after your sign-in mutation. Consider:
viewer {
  id
  currentUser {
    name
    email
  }
}
When a user is not logged in, currentUser would return null.
You could then modify your login mutation to be:
mutation SignInMutation($input: SignInInput!) {
  signIn(input: $input) {
    viewer {
      id
      currentUser {
        name
        email
        token
      }
    }
  }
}
Knowing the 'nullability' of the currentUser field provides an elegant way of determining if the user is logged in or not.
The presence of the token field implies that you are using JWT or something similar to track login status. You would need to store this token in local storage and attach it to the headers of the outgoing Relay requests to your GraphQL endpoint if it is present.
Storing the token itself would have to be done in the onCompleted callback where you make the mutation request (you will have access to the payload returned by the server in the arguments of the callback function).
As an alternative to the token, you could also explore using cookies, which would provide the same user experience but likely require less work to implement than JWT tokens.

How to use Phoenix.Token when registering a user

I know that the phoenix token can be generated as follows
Phoenix.Token.sign(MyApp.Endpoint, "user", user_id)
and, based on the documentation, it is suggested to use the user's id for the generation. The problem is that I'm trying to generate this token in the changeset, at the moment of user creation, so I don't have a user id yet. What would be the best way to use Phoenix.Token.sign? At the moment I'm using
put_change(:api_token, :base64.encode(:crypto.strong_rand_bytes(24)))
but I would like to use Phoenix.Token if possible.
thanks
You can use Phoenix.Token.sign in a meta tag, passing the connection, user salt, and user.id. Once signed, verify the token upon connecting to the socket. Remember, you must pass the same user salt when verifying as when the token was signed. For example, here is the meta tag in my application, which uses a "players" resource as opposed to "users"
app.html.eex
<%= tag :meta, name: "channel_token", content: Phoenix.Token.sign(@conn, "player auth", :player_id) %>
Note: I have the current players id stored in the session as player_id. I'm using "player auth" as the salt and the name of my token is "channel_token".
socket.js
var token = $("meta[name=channel_token]").attr("content");
var socket = new Socket("/socket", {
  params: {
    token: token
  }
});
Grab the token from the meta tag named "channel_token" and pass it into the socket params. Once the token is signed and passed into the socket, you can verify it in the connect function of your user_socket.

How to invalidate OAuth token when password is changed?

We use ASP.NET Identity in a Web Api project with SimpleAuthorizationServerProvider, and we use OAuth tokens to authorize each request coming from the client. (Tokens have an expiry timespan; we don't use refresh tokens.)
When users change their password, I would like to invalidate the tokens they may have, possibly on other devices. Is there any way to do that explicitly? I experimented and saw that existing tokens keep working without any problem after a password change, which should be prevented.
I thought about putting the password hash, or part of the hash in the OAuth token as a claim, and validating that in the OnAuthorization method of our derived AuthorizeAttribute filter.
Would this be a correct way to solve the problem?
I've based my approach on Taiseer's suggestion. The gist of the solution is the following. Every time a user changes their password (and when registers), a new GUID is generated and saved in the database in the User table. I call this GUID the password stamp, and store it in a property called LatestPasswordStamp.
This stamp has to be sent down to the client as part of the token as a claim. This can be achieved with the following code in the GrantResourceOwnerCredentials method of the OAuthAuthorizationServerProvider-implementation.
identity.AddClaim( new Claim( "PasswordTokenClaim", user.LatestPasswordStamp.ToString() ) );
This stamp is going to be sent from the client to the server in every request, and it is verified that the stamp has not been changed in the database. If it was, it means that the user changed their password, possibly from another device. The verification is done in our custom authorization filter like this.
public class AuthorizeAndCheckStampAttribute : AuthorizeAttribute
{
    public override void OnAuthorization( HttpActionContext actionContext )
    {
        var claimsIdentity = actionContext.RequestContext.Principal.Identity as ClaimsIdentity;
        if( claimsIdentity == null )
        {
            this.HandleUnauthorizedRequest( actionContext );
            return; // no claims identity, nothing more to check
        }

        // Check if the password has been changed. If it was, this token should not be accepted any more.
        // We generate a GUID stamp upon registration and every password change, and put it in every token issued.
        var passwordTokenClaim = claimsIdentity.Claims.FirstOrDefault( c => c.Type == "PasswordTokenClaim" );
        if( passwordTokenClaim == null )
        {
            // There was no stamp in the token.
            this.HandleUnauthorizedRequest( actionContext );
        }
        else
        {
            MyContext ctx = (MyContext)System.Web.Mvc.DependencyResolver.Current.GetService( typeof( MyContext ) );
            var userName = claimsIdentity.Claims.First( c => c.Type == ClaimTypes.Name ).Value;
            if( ctx.Users.First( u => u.UserName == userName ).LatestPasswordStamp.ToString() != passwordTokenClaim.Value )
            {
                // The stamp has been changed in the DB.
                this.HandleUnauthorizedRequest( actionContext );
            }
        }

        base.OnAuthorization( actionContext );
    }
}
This way the client gets an authorization error if it tries to authorize itself with a token which was issued before the password has been changed.
I do not recommend putting the hash of the password in as a claim, and I believe there is no direct way to invalidate a token when the password is changed.
But if you are OK with hitting the DB on each request sent from the client app to a protected API endpoint, then you need to store a token identifier (a GUID, maybe) for each token granted to the resource owner that requested it. You then assign the token identifier as a custom claim on the token, and check this table with each request, looking up the token identifier and the username of the resource owner.
Once the password is changed, you delete the token identifier record for this resource owner (user), and the next time the token is sent from the client it will be rejected, because the record for this token identifier and resource owner has been deleted.
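The same pattern works in any stack; as a rough TypeScript/Express sketch of the idea (tokenStore is a hypothetical lookup against your token-identifier table, and earlier middleware is assumed to have verified the JWT and attached its claims):

import type { Request, Response, NextFunction } from 'express';

interface TokenStore {
  isActive(tokenId: string, userName: string): Promise<boolean>;
  revokeAllFor(userName: string): Promise<void>; // call this on password change
}

export const checkTokenIdentifier =
  (tokenStore: TokenStore) =>
  async (req: Request, res: Response, next: NextFunction) => {
    const { jti, sub } = (req as any).tokenClaims ?? {};
    if (!jti || !(await tokenStore.isActive(jti, sub))) {
      // The token identifier was deleted (e.g. after a password change) or never existed
      return res.status(401).end();
    }
    next();
  };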
