How to clear "stmt" in heroku?? (node.js) - heroku

sqlMessage: "Can't create more than max_prepared_stmt_count statements (current value: 16382)"
From the error above, I want to know how to clear the prepared statements on Heroku, because right now the Heroku app doesn't allow increasing the limit. Running
set global max_prepared_stmt_count=99999;
returns: Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation.
I also ran
show status like 'Prepared_stmt_count';
and it reports that the count has already reached this value.

Related

How would I go about preventing the user from going into nonexistent records in a database (Visual Basic 6.0 using Microsoft DAO 2.5/3.51)

Firstly, I am using visual basic 6 because that is what my school teaches.
We are meant to do a practice project for a supposed client that wants a program that allows employees to interact with a database.
I have made the First, Prev, Next and Last buttons for record navigation; however, the user can move beyond the existing records using Prev or Next, which crashes the program with an error that there is no value.
I have tried things like:
Keeping two counters, one for the current record and one for how many records are in the selected recordset. The idea was that if the current record tried to go past the record count, the action would be cancelled, but recordset.RecordCount always returned 1 or 0 for some reason.
I have also tried testing whether the primary key field is blank, but that raises the error that there is no such record.
So how would I go about limiting the user from going beyond the records?
In the end, I used an error handler to catch the error raised when the user tries to go beyond the existing records.
What I did was:
On Error GoTo ErrHandler 'If any error happens, go to ErrHandler
recordset.MoveNext
setFields 'Subroutine for setting the text boxes to their values
Exit Sub 'Skip the handler when no error occurred
ErrHandler:
recordset.MoveLast 'If the user tries to go beyond the last record, move him back
Update 2: I have also tried using the BOF and EOF properties people have been suggesting in the comments. I was confused at first because I thought these properties checked whether you were on the first or last record, but they actually check whether you have moved outside the recordset entirely.
recordset.MoveNext 'Move to the next record
If recordset.EOF = True Then 'If the user is outside the file
recordset.MovePrevious 'Move him back
Else 'If he is not
setFields 'Set the fields
End If 'End the If
This is much simpler than the error handler.

golang get kubernetes resources (30000+ configmaps) failed

I want to use client-go to get resources in a Kubernetes cluster. Due to the large amount of data, the connection is closed while I am fetching the ConfigMaps:
stream error when reading response body, may be caused by closed connection. Please retry. Original error: stream error: stream ID 695; INTERNAL_ERROR
configmaps:
$ kubectl -n kube-system get cm |wc -l
35937
code:
cms, err := client.CoreV1().ConfigMaps("kube-system").List(context.TODO(), v1.ListOptions{})
I tried the Limit parameter and can get some data, but I don't know how to get all of it.
cms, err := client.CoreV1().ConfigMaps("kube-system").List(context.TODO(), v1.ListOptions{Limit: 1000})
I'm new to Go. Any pointers as to how to go about it would be greatly appreciated.
The documentation for v1.ListOptions describes how it works:
limit is a maximum number of responses to return for a list call. If more items exist, the
server will set the continue field on the list metadata to a value that can be used with the
same initial query to retrieve the next set of results.
This means that you should examine the response, save the value of the continue field (as well as the actual results), then reissue the same request with continue set to the value you just received. Repeat until the returned continue field is empty (or an error occurs).
See the API concepts page for details on handling chunking of large results.
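For illustration, a manual pagination loop along those lines might look roughly like this (a sketch only: the listAllConfigMaps helper, the hard-coded kube-system namespace and the page size of 1000 are assumptions, and client is assumed to be an already-configured clientset):
package cmlist

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listAllConfigMaps keeps requesting pages until the server stops returning
// a continue token, accumulating the items from each page.
func listAllConfigMaps(ctx context.Context, client kubernetes.Interface) ([]corev1.ConfigMap, error) {
	var all []corev1.ConfigMap
	opts := metav1.ListOptions{Limit: 1000}
	for {
		page, err := client.CoreV1().ConfigMaps("kube-system").List(ctx, opts)
		if err != nil {
			return nil, err
		}
		all = append(all, page.Items...)
		if page.Continue == "" { // no more pages
			return all, nil
		}
		opts.Continue = page.Continue // fetch the next chunk with the same query
	}
}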
You should use a ListPager to paginate requests that need to query many objects. The ListPager supports buffering pages, so it has improved performance over simply using the Limit and Continue values yourself.
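A sketch of that approach using the pager package from client-go (k8s.io/client-go/tools/pager); the printAllConfigMaps helper and the namespace are illustrative assumptions:
package cmpager

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/pager"
)

// printAllConfigMaps lets ListPager handle the Limit/Continue bookkeeping and
// invokes the callback once per ConfigMap.
func printAllConfigMaps(ctx context.Context, client kubernetes.Interface) error {
	p := pager.New(pager.SimplePageFunc(func(opts metav1.ListOptions) (runtime.Object, error) {
		return client.CoreV1().ConfigMaps("kube-system").List(ctx, opts)
	}))
	return p.EachListItem(ctx, metav1.ListOptions{Limit: 1000}, func(obj runtime.Object) error {
		cm, ok := obj.(*corev1.ConfigMap)
		if !ok {
			return fmt.Errorf("unexpected object type %T", obj)
		}
		fmt.Println(cm.Name)
		return nil
	})
}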

In Substrate what does code: 1012 "Transaction is temporarily banned" mean?

The full text of the message is :
{code: 1012, message: "Transaction is temporarily banned"}
This would indicate that the transaction is held somewhere in Substrate Runtime mempool or something of that nature, but it is not entirely clear what possible causes can trigger this, and what the eventual outcome might be.
For example,
1) is it that too many transactions have been sent from a given account, IP address or other? Has some threshold been reached?
2) is the transaction actually invalid, or not?
3) The use of the word "temporary" suggests a delay in processing, not an outright rejection of the transaction. Therefore does this suggest that the transaction is valid, but delayed? If so, for how long?
The comments in the Substrate runtime files core/rpc/src/author/errors.rs and core/transaction-pool/graph/src/errors.rs are no clearer about what the outcome is.
In front of the mempool there is a transaction blacklist, which can trigger this error. Specifically, this error means that a transaction with the same hash was either:
Part of recently mined block
Detected as invalid during block production and removed from the pool.
Additionally, this error can occur when:
The transaction reaches its longevity, i.e. it is not mined for TransactionValidation::longevity blocks after being imported to the pool.
By default longevity is set to u64::max so this normally should not be the problem.
In any case -ltxpool=log should reveal more details around this error.
A transaction is only temporarily banned because it will be removed from the blacklist when either:
30 minutes pass
There are more than 4,000 transactions on the blacklist
Check out core/transaction-pool/graph/src/rotator.rs.
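For intuition only, here is a rough Go sketch of the banning behaviour described above; the actual implementation is the Rust rotator in core/transaction-pool/graph/src/rotator.rs, and every name below is made up for illustration:
package banlist

import "time"

const (
	banDuration = 30 * time.Minute // bans expire after 30 minutes
	maxBanned   = 4000             // expired entries are cleared once the list grows past this
)

// banList records when each transaction hash was banned.
type banList struct {
	bannedAt map[string]time.Time
}

// ban blacklists a hash and, if the list is over its size limit, drops expired entries.
func (b *banList) ban(hash string, now time.Time) {
	b.bannedAt[hash] = now
	if len(b.bannedAt) > maxBanned {
		b.clearExpired(now)
	}
}

// isBanned reports whether a hash is still within its 30-minute ban window.
func (b *banList) isBanned(hash string, now time.Time) bool {
	t, ok := b.bannedAt[hash]
	return ok && now.Sub(t) < banDuration
}

// clearExpired removes entries whose ban window has elapsed.
func (b *banList) clearExpired(now time.Time) {
	for h, t := range b.bannedAt {
		if now.Sub(t) >= banDuration {
			delete(b.bannedAt, h)
		}
	}
}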

Where is the best place to store an application setting that needs to be updated frequently in ServiceNow

I have a scheduled script execution that needs to persist a value between runs. It is updated with each run. Using gs.setProperty seemed like the natural place until I came across this:
Care should be taken when setting system properties (sys_properties)
using this method as it causes a system-wide cache flush. Each flush
can cause system degradation while the caches rebuild. If a value must
be updated often, it should not be stored as a system property. In
general, you should only place values in the sys_properties table that
do not frequently change.
Creating a separate table to store a single scalar value seems like overkill. Is there a better place to store it?
You could set a preference if you need it in the instance. Another place could be the events table: log an event with the data in parm1 or parm2, and on the next run query the most recent event.
I'd avoid creating a table, as that has cost implications for some clients. I agree with using sys_properties; if the value is sensitive, you can encrypt it with GlideEncrypter before storing it:
var encrypter = new GlideEncrypter();
var encrypted = encrypter.encrypt('Super Secret Phrase');
gs.info('encrypted: ' + encrypted);
var decrypted = encrypter.decrypt(encrypted);
gs.info('decrypted: ' + decrypted);
/**
*** Script: encrypted: g/bXLJHa7xNRMKZEo5q/YtLMEdse36ED
*** Script: decrypted: Super Secret Phrase
*/
This way only administrators could really read this data. Also, if I recall correctly, the sysevent table is cleared after 7 days; you could have the job remove the event as soon as it has read it into memory.

Weblogic 10.3, JDBC, Oracle, SQL - Table or View does not exist

I've got a really odd issue that I've not had any success googling for.
It started happening with no changes to the DB, connection settings, code etc.
The problem is that, when accessing a servlet, one of the EJBs does a direct SQL call, which is very simple:
"select \n" +
" value, \n" +
" other_value \n" +
" from \n" +
" some_table \n" +
" where some_condition = ? "
That's obviously not the exact SQL, but it's pretty close. For some reason, this started returning an error stating "ORA-00942: table or view does not exist".
The table exists, and the kicker is that if I hook in a debugger, change a space or something equally minor in the query (not changing the query itself), and hot-deploy the change, it works fine. This isn't the first time I've run across this. It only seems to happen in dev environments (I haven't seen it in QA, sandbox, or production yet), is not always replicable, and it is driving me seriously insane.
By not always replicable I mean that a clean build and redeploy will sometimes fix the problem, but not always. It's not always the same table (although once the error occurs it keeps happening with the same query).
Just throwing a feeler out there to see if anybody has run into issues like this before, and what they may have discovered to fix it.
Sounds like maybe one connection in your JDBC pool has a problem, which could explain the intermittent nature and that redeploy only sometimes fixes it, as you could end up still using the same connection afterwards. You could try resetting the connection pool instead of redeploying, perhaps. (java weblogic.Admin -url t3://<server_url> RESET_POOL <pool_name>, I think)
You've said there's only one schema, but does that mean only one schema exists or that all the tables are under one schema? Is it possible that you're doing an ALTER SESSION SET CURRENT_SCHEMA somewhere, and when whichever connection that's issued against is returned to the pool and then randomly used for the query later it can't see anything in the main schema any more? That could happen in a package or trigger as well as from the Java side, and could be a 'temporary' change that doesn't get reverted after an exception. Sounds like something that might only exist in a dev environment, too...
