When launching new, unnamed Elasticsearch nodes, I see a unique name displayed in the debugging output. In this case the node is called "Riot"; other gems include "Oneg the Prober":
org.elasticsearch.env: [Riot] max file descriptors [10240] for elasticsearch process likely too low, consider increasing to at least [65536]
There is always a clever unique name, coming from somewhere. I've looked for this line in the source code and can only find a reference to Locale.ROOT, but I cannot find the call that fetches a new unique name. The names are always funny and I'd like to use a similar generator. (:
The list of names is in elastic/elasticsearch/core/src/main/resources/config/names.txt
That file was removed in the context of making node names persistent. The change was merged to master in 2016, quite some time before this question was asked.
You can find "Riot" on line 2131 and "Oneg the Prober" on line 1898.
I came across this list, which has 2938 character names; I'm not sure if it is the original source:
https://github.com/jprante/elasticsearch-server/blob/master/elasticsearch-server-node/src/main/resources/config/names.txt
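For anyone who wants a similar generator: a minimal sketch in Python, assuming you have saved a local copy of that names.txt with one character name per line (the file name and path are up to you):

import random

# Load the character names, one per line, skipping blanks.
with open('names.txt', encoding='utf-8') as f:
    names = [line.strip() for line in f if line.strip()]

def random_node_name():
    return random.choice(names)

print(random_node_name())  # e.g. 'Riot' or 'Oneg the Prober'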
How can I get the value of important id and ValueType?
I have tried using web_reg_save_param_regexp (but unfortunately I don't fully understand how the function works).
I have also tried using web_reg_save_param (with the help of offset and length).
Unfortunately, once again, I cannot get an accurate value: some values change in length, especially when the total amount values change dynamically between runs.
<important id=\"insertsomevalueshere\" record=\"1\" nucTotal=\"NUC609.40\"><total amount=\"68.75\" currency=\"USD\"/><total amount=\"609.40\" currency=\"USD\"/><out avgsomecost=\"540.65\" ValueType=\"insertsomevalueshere\" containsawesomeness=\"1\" Score=\"-97961\" somedatatype=\"1\" typeofData=\"VAL\" web=\"1\">
Put these lines of code before the line of code which does your web request:
web_reg_save_param_regexp("ParamName=importantid","Regexp=<important id=\\\"(.*?)\\\"",LAST);
web_reg_save_param_regexp("ParamName=ValueType","Regexp= ValueType=\\\"(.*?)\\\"",LAST);
You will then have two stored parameters, 'importantid' and 'ValueType'.
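You can then reference them in a later request. A hedged example (the URL and body format below are placeholders, not from your recorded script); {importantid} and {ValueType} are expanded from the parameters saved above:

// Hypothetical follow-up request using the captured values.
web_custom_request("Resubmit",
    "URL=http://example.com/resubmit",    // placeholder URL
    "Method=POST",
    "Body=id={importantid}&type={ValueType}",
    LAST);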
A dynamic number of elements to correlate? Your path for resubmission is through web_custom_request(). You will need to build the string dynamically, with the name:value pairs for all of the data which needs to be included.
This path places a premium on your string-manipulation skills in the language of the tool. The default is C, but you have other language options if your skills are more refined in another language.
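A minimal sketch of that approach, assuming the amounts were saved as a parameter array (e.g. by adding "Ordinal=All" to the registration above); the URL and name:value format are placeholders:

// Build the body from a correlated parameter array at run time.
// Assumes the values were saved into totalAmount_1 .. totalAmount_n.
int  i, n;
char body[4096] = "";
char pair[256];

n = lr_paramarr_len("totalAmount");               // number of captured values
for (i = 1; i <= n; i++) {
    sprintf(pair, "amount%d=%s&", i, lr_paramarr_idx("totalAmount", i));
    strcat(body, pair);
}
lr_save_string(body, "dynamicBody");              // expose it as a parameter

web_custom_request("Resubmit",
    "URL=http://example.com/resubmit",            // placeholder URL
    "Method=POST",
    "Body={dynamicBody}",
    LAST);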
I am writing an extension to my company's existing SNMP MIB. I have a whole list of objects, each with the same properties, and I want to be able to get and set these through SNMP.
So, for example, suppose my objects each have name, desc, arg0, arg1. What I want is to be able to refer to these as:
fullpath.objects.ObjectA.name
fullpath.objects.ObjectA.desc
fullpath.objects.ObjectA.arg0
fullpath.objects.ObjectB.name
fullpath.objects.ObjectB.desc
fullpath.objects.ObjectB.arg0
However, leaf nodes appear to be required to have unique names, so I am unable to define this.
I can use an SNMP table to produce:
fullpath.objects.table.name.1
fullpath.objects.table.desc.1
fullpath.objects.table.arg0.1
fullpath.objects.table.name.2
fullpath.objects.table.desc.2
fullpath.objects.table.arg0.2
But there is nowhere to look up the fact that 2 means ObjectB. This leaves it open to user error: looking up the wrong value and setting the wrong thing.
At the moment the best solution I can see is:
fullpath.objects.ObjectAName
fullpath.objects.ObjectADesc
fullpath.objects.ObjectAArg0
fullpath.objects.ObjectBName
fullpath.objects.ObjectBDesc
fullpath.objects.ObjectBArg0
which involves defining a name for every object (there are 20 or so of them). The set of objects is fixed, so this is OK... just not very tidy.
Is there some way to define names for the indexes in the table?
Is there some way of defining a container type?
Is there some way of allowing leaf nodes to be non-unique?
Any other ideas?
You should definitely use SNMP tables to accomplish what is required. This is the only way.
MIB object names must be unique within the entire MIB file.
You can easily use an object of OCTET STRING type as the table index, so each byte/symbol/char of the OCTET STRING value is translated to the corresponding numeric ASCII code in the OID.
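To illustrate (a sketch, not taken from the MIB in question): per RFC 2578 section 7.7, a variable-length OCTET STRING index is encoded as its length followed by one sub-identifier per byte, so the row for "ObjectB" becomes addressable by name:

def octet_string_index(name, implied=False):
    # Encode an OCTET STRING table index as OID sub-identifiers (RFC 2578 7.7).
    subids = [str(b) for b in name.encode('ascii')]
    if not implied:
        subids.insert(0, str(len(subids)))  # length prefix for variable-size indexes
    return '.'.join(subids)

# fullpath.objects.table.desc.<index> for the row named 'ObjectB':
print(octet_string_index('ObjectB'))  # -> 7.79.98.106.101.99.116.66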
I ended up just using a naming convention and adding each of the settings directly into the MIB.
Not really the answer I wanted, but it means that all of the settings show up in the MIB, and that reduces the chance of users setting the wrong setting.
I am writing a script in LoadRunner for Siebel Open UI. All my requests contain this parameter, with different values. What does it mean?
Examples (from different requests):
"Name=SWEIPS", Value = #0'0'1'0'GetProfileAttr'3'attrName'SBRF Position Id'"
"Name=SWEIPS", Value = #0'0''0'3'1-SQE21A, 1-SQL21E, 1SQE31"
And so on.
Can I simply delete it?
Can I simply delete it? - No, you’re not supposed to delete it.
Compare the SWEIPS values by recording twice or thrice with different data sets, and check whether there are any date/time values in SWEIPS. If there is nothing to correlate, leave it as it is; there is no need to delete it.
Be sure to correlate values like SWET, ROWID, SWECount, SWEC, and so on.
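For example, a hedged sketch for SWEC (the regular expression is a guess; verify the exact format against your own recorded responses before relying on it):

// Hypothetical correlation for SWEC; adjust the pattern to your recording.
web_reg_save_param_regexp(
    "ParamName=SWEC",
    "Regexp=SWEC=([0-9]+)",
    LAST);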
While investigating a ConflictError (see this previous question) I saw a lot of persistent.mapping.PersistentMapping conflicts.
Looking at a specific one it turned out to be a PersistentMapping for plone.scale.
It turns out that a random object with just one image has 562 keys on it; no wonder it gets a ConflictError...
Some context on the object that holds this plone.scale annotation:
- dexterity content type
- one of its behaviors has an image field (plone.namedfile.field.NamedBlobImage)
The code to see it is as follows (start a debugging instance first: ./bin/instance debug):
from ZODB.utils import p64
OID = 0x568428  # taken from the ZEO client logs
mapping = app._p_jar[p64(OID)]
len(mapping)  # returns 562
The mysterious part is that only 4 keys in that persistent mapping are tuples, while the other 558 are just hashes.
A brief look at the plone.scale.storage.AnnotationStorage.scale method seems to imply that there should be a one-to-one relation between the tuple keys and the hash keys in the persistent mapping.
Further investigation of the elements reveals that, indeed, if you look at the width and height properties of all the elements, there are only 4 different combinations (the ones from the tuples themselves).
Since a new scale is generated whenever the modified time is newer (see the scale method mentioned above), and plone.namedfile.scaling.ImageScaling.modified uses the context as the source for modified, does that mean a new scale will be generated on every single update of the object?
So two questions arise from the above:
Is my assumption true that only 4 scales are really used and the other 558 are old and useless?
Provided the answer to the previous is yes, shouldn't they be cleaned up?
You may be right, but surely the correct place to report this is https://dev.plone.org/newticket
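Until that is resolved, a cleanup along these lines could be run in the same ./bin/instance debug session as above. This is only a sketch: it assumes, per the observations above, that each tuple-keyed entry is an info dict whose 'uid' field points at one of the 4 live scales, and that every other key is stale:

import transaction

# Keep only the scale records whose uid is referenced by a tuple key.
live_uids = set(info['uid'] for key, info in mapping.items()
                if isinstance(key, tuple))
stale = [key for key in list(mapping)
         if not isinstance(key, tuple) and key not in live_uids]
for key in stale:
    del mapping[key]
transaction.commit()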
I have over 200,000 accessions in a flat file, for which I need to retrieve the relevant entries from NCBI.
I use Batch Entrez (http://www.ncbi.nlm.nih.gov/sites/batchentrez) to do the job, but I have encountered several problems:
The initial file was split into multiple sub-files, each containing 4000 lines, but it seems Batch Entrez has a size limitation on the returned file. For example: if the first 1000 accessions each have tens of thousands of lines, reaching the size limitation, then the remaining 3000 accessions will be rejected and won't be searched.
One possible solution I have in mind is to split the file into even more sub-files and search them individually. However, this requires too much manual effort.
So I am just wondering if there is any other solution, or any code that could be used.
Thanks in advance
Your problem sounds like a good fit for one of the Bio* toolkits. Here is a solution using BioSmalltalk:
| giList gbReader |
giList := (BioObject openFullFileNamed: 'd:\Batch_entrez_1.txt') contents lines.
gbReader := BioNCBIGenBankReader new.
gbReader
    genBankRecordsFrom: 'nuccore'
    format: #setModeXML
    uids: giList.
(BioGBSeqCollection newFromXMLCollection: gbReader searchResults)
    collect: [: e | BioParser
        tokenizeNcbiXmlBlast: e contents
        nodes: #('GBAuthor' 'GBSeq_definition') ]
To execute/debug the script, just select it and a right-click will open the Smalltalk world-menu.
The API automatically splits and fetches your accession list (in the script, the list is contained in Batch_entrez_1.txt), respecting the NCBI Entrez post limits to avoid penalties.
The result format is XML (an "easy" format to parse or filter for specific fields), although it could be any of the retrieval modes supported by Entrez; for example, setting #setModeText will answer an ASN.1 representation. Replace 'nuccore' with the database you want to query. Finally, choose the fields of interest; in the script I have chosen 'GBAuthor' and 'GBSeq_definition', but you are free to choose any of the available nodes.
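For comparison, the same "post once, fetch in batches" idea can be automated with Biopython as well; a hedged sketch (the database, return type, and batch size are assumptions to adjust):

from Bio import Entrez

Entrez.email = 'you@example.com'  # NCBI requires a contact address

with open('Batch_entrez_1.txt') as f:
    ids = [line.strip() for line in f if line.strip()]

# Upload the whole accession list once, then page through the results,
# so there is no need to split the input file by hand.
post = Entrez.read(Entrez.epost('nuccore', id=','.join(ids)))
batch = 500  # assumed batch size; stay well under NCBI's limits
for start in range(0, len(ids), batch):
    handle = Entrez.efetch(db='nuccore', webenv=post['WebEnv'],
                           query_key=post['QueryKey'],
                           retstart=start, retmax=batch,
                           rettype='gb', retmode='text')
    print(handle.read())  # parse or save each chunk as needed
    handle.close()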