Problem with solving tasks related to operating on incidents in ServiceNow

I am new to ServiceNow and have a problem with how to do two tasks.
For the first, it seems to me that it should be done using business rules, but I don't know what script to use here; I'm stuck.
For the second, I don't know where to start. Can someone help?
As an itil user, I should be able to move an incident to an On Hold state and specify a Date/Time. When this Date/Time is reached, the incident state should change to Update Required. This is so I am reminded that an incident I was working on requires my attention.
As an itil user, I should not be able to select a priority for an incident that is lower priority than any of its children.
If I select a priority for a child incident that is higher than its parent, then I should receive a warning on submit. If I choose to submit anyway, then the parent incident’s priority should be updated to match the child’s.

In my opinion you should consider using SLAs to achieve this requirement. However, if you want to achieve it with a business rule (BR), you need to hold the specific date on the Incident table by creating a custom date field. Then you could run the below in an after BR:
var today = new GlideDate().getValue(); // yyyy-MM-dd
var reminder = current.getValue('<custom_field_name>'); // your custom date field
if (today == reminder) {
    current.state = 3; // backend value of the target state ("On Hold" here; the requirement's "Update Required" state would use its own backend value)
    current.update();
}
For the second task, you could again use a BR on the Incident table, like below:
var childPriority = current.priority;
var parentPriority = current.parent.priority;
if (childPriority < parentPriority) { // in ServiceNow a lower number means a higher priority
    gs.addErrorMessage("Your message");
    // per the requirement, the parent is updated to match the child
    var parentInc = new GlideRecord('incident');
    if (parentInc.get(current.parent)) {
        parentInc.priority = current.priority;
        parentInc.update();
    }
}

Related

How can I update an entity synchronously in Spring JPA?

I have an issue:
I have a Product entity which has 2 columns, id and quantity.
So I have 2 APIs:
one for update, which updates the product entity's quantity (quantity = quantity - 1)
one for update, which updates the product entity's quantity (quantity = quantity + 1)
The issue is that when I call the 2 APIs at the same time, the result is not what I expect.
Can anyone help? Thank you.
Well, for your particular scenario there is a concept called locking, and there are two types of locking:
Optimistic
Pessimistic
The idea is that when one transaction is updating a row of a db table, you should not allow another transaction to update that row until the previous one is committed.
In an application there are several ways to achieve this type of locking. We can describe this concurrent updating process as a collision. In a system where a collision is not going to happen very frequently, you can use the optimistic locking approach.
In the optimistic approach you keep a version number in your row, and when you perform an update you increase the version by 1. Let's analyse your scenario now and call your two services I (increase) and D (decrease). You have a product row P in your database table where quantity = 3, version = 0. When I and D are called, both of them fetch P from the database while the state of P is as below:
quantity = 3, version = 0
Now D executes first, decreases the quantity and saves P.
Your update query should be like below:
UPDATE Product p
SET p.quantity = :newQuantity,
    p.version  = p.version + 1
WHERE p.version = :oldVersion AND p.id = :id
In the case of D, newQuantity = 2 (oldQty - 1) and oldVersion = 0 (we fetched it at the beginning).
Now the current state of P is as below:
quantity = 2, version = 1
Now when I tries to execute, you generate the same update query, but in this case newQuantity = 4 (oldQty + 1) and oldVersion = 0 (we fetched it at the beginning).
If you put these values into the update query, your row won't be updated, as the version-checking part will be false. You can then throw a locking exception to notify your client that the request could not be completed and can be retried. This is basically the core concept of optimistic locking, and there are much more efficient ways to handle it with frameworks like Hibernate.
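For illustration, here is a minimal sketch of how this looks with JPA/Hibernate, assuming a Product entity like yours (class and field names are illustrative). The @Version field makes Hibernate generate exactly the kind of versioned UPDATE shown above and throw an OptimisticLockException when the version check fails:
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Product {

    @Id
    private Long id;

    private int quantity;

    @Version // incremented automatically on every successful update
    private long version;

    // getters and setters omitted
}
Catching OptimisticLockException in your service layer is then the natural place to retry or to report the failed request to the client.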
Here you can notice that we have not denied any read requests while updating the row. In the pessimistic locking approach, by contrast, you deny read requests while another transaction is ongoing. So while D is in the process of decreasing, I would not be able to read the value, and from there you can return to your client saying that the request was not completed. But this approach takes a toll on read-heavy tables in exchange for tight data integrity.
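If you want to try the pessimistic route with Spring Data JPA, a minimal sketch looks like this (repository and entity names are illustrative, and the method must be called inside a transaction). @Lock(LockModeType.PESSIMISTIC_WRITE) makes the generated SELECT acquire a row lock (e.g. SELECT ... FOR UPDATE) that is held until the transaction commits:
import java.util.Optional;

import javax.persistence.LockModeType;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

public interface ProductRepository extends JpaRepository<Product, Long> {

    // concurrent transactions block on this row until the lock is released
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Optional<Product> findById(Long id);
}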

Kafka Streams: Add Sequence to each message within a group of messages

Set Up
Kafka 2.5
Apache KStreams 2.4
Deployment to OpenShift (containerized)
Objective
Group a set of messages from a topic using a set of value attributes & assign a unique group identifier
-- This can be achieved by using selectKey and groupByKey
KGroupedStream<String, Message> groupedStream = originalStreamFromTopic // Message = your value type
    .selectKey((k, v) -> String.join("|", v.attribute1, v.attribute2))
    .mapValues((k, v) -> { // ValueMapperWithKey: copy the new group key into the value
        v.setGroupKey(k);
        return v;
    })
    .groupByKey();
For each message within a specific group, create a new message with an itemCount number as one of the attributes.
e.g. A group with key "keypart1|keyPart2" can have 10 messages, and each of the messages should have an incremental id from 1 through 10.
Options I have considered:
aggregate?
count plus some additional StateStore-based implementation
One of the options that I listed above can make use of a couple of state stores:
state store 1 -> mapping of each groupId to an individual item (KTable)
state store 2 -> count per groupId (KTable)
A join of these 2 tables would stamp a sequence on the messages as they get published to the final topic.
Other statistics:
The average number of messages per group would be in the 1000s, except for an outlier case where it can go up to 500k.
In general the candidates for a group should be available on the source topic within a span of 15 mins max.
The following points are of concern from an optimum-solution perspective:
I am still not clear how I would be able to stamp a sequence number on the messages unless some kind of state store is used to keep track of the messages published within a group.
The use of KTables and state stores (either explicitly, or implicitly through the use of KTable) would add considerably to the state store size.
Given that the problem involves some kind of stateful processing, the state store can't be avoided, but any possible optimizations might be useful.
Any thoughts or references to similar patterns would be helpful.
You can use one state store with which you maintain the ID for each composite key. When you get a message, you select the new composite key and then look up the next ID for that composite key in the state store. You stamp the message with the ID that you just looked up. Finally, you increase the ID and write it back to the state store.
Code-wise, it would be something like:
// create the state store: composite key -> next sequence ID
StoreBuilder<KeyValueStore<String, Long>> keyValueStoreBuilder = Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("idMaintainer"),
        Serdes.String(),
        Serdes.Long()
);
// add the store to the topology
builder.addStateStore(keyValueStoreBuilder);

originalStreamFromTopic
    .selectKey((k, v) -> String.join("|", v.attribute1, v.attribute2))
    .repartition() // requires Kafka Streams 2.6+; on 2.4 use through() with an explicit topic
    .transformValues(() -> new ValueTransformerWithKey<String, Message, Message>() { // Message = your value type
        private KeyValueStore<String, Long> state;

        @Override
        public void init(ProcessorContext context) {
            state = (KeyValueStore<String, Long>) context.getStateStore("idMaintainer");
        }

        @Override
        public Message transform(String key, Message value) {
            // your logic to:
            // - get the next ID for the composite key from the store,
            // - stamp the record,
            // - increase the ID,
            // - write the ID back to the state store,
            // - return the stamped record
            return value; // placeholder
        }

        @Override
        public void close() {
        }
    }, "idMaintainer")
    .to("output-topic");
You do not need to worry about concurrent access to the state store, because in Kafka Streams records with the same key are processed by one single task, and tasks do not share state stores. That means all messages with the same composite key will be processed by one single task, which exclusively maintains the IDs for its composite keys in its state store.
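For completeness, a minimal sketch of wiring the topology above into a runnable application (the application ID and bootstrap servers are placeholders to adapt):
import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sequence-stamper"); // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));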

Prevent available stock from becoming negative

In Adempiere it is possible for inventory stock to become negative. One way to make it negative is to put a Quantity in an Internal Use Inventory document that is larger than the Available Stock in the Warehouse.
Example
Product Info
------------
Product || Qty
Fertilizer || 15
It's shown in Product Info that the current Qty of Fertilizer is 15. Then I make an Internal Use Inventory document:
Internal Use Inventory
----------------------
Product || Qty
Fertilizer || 25
When I complete it, the Quantity will be -10. How can I prevent the Internal Use Inventory from being completed when its Quantity is more than the Available Stock, so that I can avoid negative stock?
This is purposely designed functionality of Adempiere. In some scenarios the inventory is allowed to become negative because it is felt that, in those scenarios, it is better to let the process complete; the negative quantity then highlights a problem that must be addressed. In the case of Internal Use, the user is warned that the stock will go negative if they proceed.
To change this standard functionality you need to modify the
org.compiere.model.MInventory.completeIt()
But if you change the code directly, it will be more difficult to keep your version in sync with the base Adempiere, or even just to apply patches.
The recommended approach would be to add a Model Validator. This is a mechanism that watches the underlying data model and enables additional code/logic to be injected when a specific event occurs.
The event you want is the Document Event TIMING_BEFORE_COMPLETE.
You would create a new model validator as described in the link, register it in Adempiere's Application Dictionary, and, since you want your code to trigger when an Inventory document is completed, add a method something like this:
public String docValidate (PO po, int timing)
{
    if (timing == TIMING_BEFORE_COMPLETE) {
        if (po.get_TableName().equals(MInventory.Table_Name))
        {
            // your code to be executed
            // it is executed just before any (internal or physical)
            // Inventory is Completed
        }
    }
    return null;
} // docValidate
A word of warning: the Internal Use functionality is the same code used by the Physical Inventory (i.e. stock count) functionality! They just have different windows in Adempiere, so be sure to test both functionalities after any change is applied. The core org.compiere.model.MInventory contains a hint as to how you might differentiate the two:
//Get Quantity Internal Use
BigDecimal qtyDiff = line.getQtyInternalUse().negate();
//If Quantity Internal Use = Zero Then Physical Inventory Else Internal Use Inventory
if (qtyDiff.signum() == 0)
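Putting that hint to use inside the validator, a sketch of the differentiation might look like this (variable names are illustrative):
for (MInventoryLine line : inventory.getLines(true))
{
    if (line.getQtyInternalUse().signum() == 0)
        continue; // Physical Inventory line - keep the standard behaviour
    // the negative-stock check for Internal Use lines goes here
}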
To prevent the stock from going negative you can use two methods:
a Callout in code
the beforeSave() method
To apply it in a Callout, you need to create a Callout class, get the current stock Qty at the locator, subtract the entered Qty from it, and display an error if the result is less than 0. Apply this on the Qty field and you will get the desired result.
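A minimal sketch of such a callout, following the standard Adempiere callout pattern and reusing the adempiere.bomqtyonhand SQL function from the answer further below (the class name, method name and context variables are assumptions to adapt to your window):
import java.math.BigDecimal;
import java.util.Properties;

import org.compiere.model.CalloutEngine;
import org.compiere.model.GridField;
import org.compiere.model.GridTab;
import org.compiere.util.DB;
import org.compiere.util.Env;

public class CalloutCheckStock extends CalloutEngine // illustrative class name
{
    // Attach this callout to the Qty field of the Inventory Line tab
    public String qtyInternalUse(Properties ctx, int WindowNo, GridTab mTab,
            GridField mField, Object value)
    {
        if (value == null)
            return "";
        BigDecimal qtyEntered = (BigDecimal) value;
        int M_Product_ID = Env.getContextAsInt(ctx, WindowNo, "M_Product_ID");
        int M_Warehouse_ID = Env.getContextAsInt(ctx, WindowNo, "M_Warehouse_ID");
        int M_Locator_ID = Env.getContextAsInt(ctx, WindowNo, "M_Locator_ID");

        // on-hand quantity at the locator, via the same SQL function used below
        String sql = "SELECT adempiere.bomqtyonhand(?, ?, ?)";
        BigDecimal onHand = DB.getSQLValueBD(null, sql, M_Product_ID, M_Warehouse_ID, M_Locator_ID);

        if (onHand == null || onHand.subtract(qtyEntered).signum() < 0)
            return "Qty less than 0"; // a non-empty return is shown as an error on the field
        return "";
    }
}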
The beforeSave() method is a slightly better way, as it doesn't need a whole new class and consumes less memory. Search for the MInventoryLine class in the code, then search for beforeSave() in it, and add the same logic (getting the stock and subtracting the entered Qty from it). The code in beforeSave() will be like this:
if (/* your condition here */) {
    log.saveError("Could Not Save - Error", "Qty less than 0");
    return false;
}
I am assuming that you know the basic code to create a Callout and design the condition. If you need any help, let me know.
Adding to Mr. Colin's answer, please see the code below, which restricts negative inventory for M_Inventory-based transactions. You can apply the same concept to the M_InOut and M_Movement tables as well.
int negativeCount = 0;
String negativeProducts = "";
String m_processMsg = null;

String blockNegativeQty = MSysConfig.getValue("BLOCK_NEGATIVE_QUANTITY_IN_MATERIAL_ISSUE", "Y", getAD_Client_ID());
for (MInventoryLine line : mInventory.getLines(true))
{
    if (blockNegativeQty.equals("Y"))
    {
        // Always check the on-hand Qty, not the Qty Book value; the old drafted Qty Book value may have changed already.
        String sql = "SELECT adempiere.bomqtyonhand(?, ?, ?)";
        BigDecimal onhandQty = DB.getSQLValueBD(line.get_TrxName(), sql, line.getM_Product_ID(),
                mInventory.getM_Warehouse_ID(), line.getM_Locator_ID());
        BigDecimal qtyMove = onhandQty.subtract(line.getQtyInternalUse());
        if (qtyMove.compareTo(Env.ZERO) < 0)
        {
            log.warning("Negative Movement Quantity is restricted. Qty On Hand = " + onhandQty
                    + " AND Qty Internal Use = " + line.getQtyInternalUse()
                    + ". The Movement Qty is = " + qtyMove);
            negativeCount++;
            negativeProducts = negativeProducts + line.getM_Product().getName() + ": " + onhandQty + " ; ";
        }
    }
}
if (negativeCount > 0)
{
    m_processMsg = "Negative Inventory Restricted."
            + " Restriction has been enabled through the client-level system configurator."
            + " Products : " + negativeProducts;
}
return m_processMsg;

Linq query within a date range on several tables?

I'm trying to find a way to detect spam in my SignalR chat application. Every time a user sends a message I would like to check the number of messages sent by that user in the last 5 seconds. I have a database with 2 tables, Message and User, where messages get logged with a MessageDate and users with a UserID. There is a many-to-one relationship between the tables (1 user per message, several messages per user).
How can I write a query to check for messages sent by a specific user in the last 5 seconds?
I have tried looking for a solution online, but I'm new to queries and it's hard to get everything right (the join, the date range, using the Count property and getting the data models right).
The closest I've gotten is something like :
var db = new MessageContext();
int messageCount = (from op in db.Message
                    join pg in db.User on op.UserID equals pg.UserID
                    where pg.UserID == op.UserID
                       && op.MessageDate >= DateTime.Now.AddSeconds(-5)
                       && op.MessageDate <= DateTime.Now
                    select op)
                   .Count();
Thanks in advance, any help appreciated!
Actually I don't see why you need a join at all. With a one-to-many relationship you should have navigation properties, so you shouldn't need a join; the code should look like this:
var cutoff = DateTime.Now.AddSeconds(-5);
var q = db.Users
    .Select(usr =>
        new
        {
            User = usr,
            LastMessages = usr.Messages
                .OrderByDescending(msg => msg.MessageDate)
                .Take(5)
        })
    .Where(x => x.LastMessages.All(msg => msg.MessageDate >= cutoff));
// q now contains the users that have posted 5 messages or more in the
// last 5 seconds, together with those last messages.

How can this query be improved?

I am using LINQ to write a query: one query shows all active customers, and another shows all customers, active as well as inactive.
if (showall)
{
    var prod = Dataclass.Customers.Where(/* multiple factors */); // inactive + active
}
else
{
    var prod = Dataclass.Customers.Where(/* multiple factors */ && active == true); // only active
}
Can I do this using only one query? The issue is that the multiple factors are repeated in both queries.
Thanks
var customers = Dataclass.Customers.Where(multiple factors);
var activeCust = customers.Where(x => x.active);
I really don't understand the question either; I wouldn't want to make this a one-liner because it would make the code unreadable.
I'm assuming you are trying to minimize the number of roundtrips?
If "multiple factors" is the same, you can just filter for active users after your first query:
var onlyActive = prod.Where(p => p.active == true);
Wouldn't you just use your first query to return all customers? Otherwise you'd be returning the active users twice.
Options I'd consider:
Bring all customers over once, ordered by the 'status' column, so you can easily split them into two sets.
Focus on minimizing DB roundtrips; whatever you do in the front end costs an order of magnitude less than going to the DB.
Revisit the user requirements. For example, consider paging the results: it's unlikely that the end user will need all customers at once.
