cacti - multi cpu util - multi line OID - cpu

I have the OID .1.3.6.1.2.1.25.3.3.1.2.
I get 24 rows back (it's a 24-core server), and I want to create one graph with all the rows so I can see the utilization.
Please help me :)
Thanks...

Had the same problem and I created a data input method in Perl which uses Net::SNMP.
Get the script here:
https://gist.github.com/1139477
Get the data template here:
https://gist.github.com/1237260
Put the script into $CACTI_HOME/scripts, make sure it's executable and import the template.
Make sure you have Perl's Net::SNMP installed.
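If Perl isn't an option, here is a rough Python sketch of the same idea (just an illustration, assuming the net-snmp snmpwalk binary is installed and the device answers SNMPv2c with community string "public"): walk hrProcessorLoad and print the name:value pairs a Cacti data input method expects on one line.

# Hypothetical sketch of a Cacti data input script in Python.
# Walks hrProcessorLoad (.1.3.6.1.2.1.25.3.3.1.2) and prints "cpuN:value"
# pairs on a single line, the output format Cacti data input methods parse.
import subprocess
import sys

host = sys.argv[1] if len(sys.argv) > 1 else "localhost"
values = subprocess.run(
    ["snmpwalk", "-v2c", "-c", "public", "-Oqv", host, ".1.3.6.1.2.1.25.3.3.1.2"],
    capture_output=True, text=True, check=True,
).stdout.split()
print(" ".join("cpu%d:%s" % (i, v) for i, v in enumerate(values)))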
Have fun!
Alex.

Related

How to run pelias with custom data

I have millions of Bangladesh address records. They are clean and well structured (POI, house, road, area, postcode, latitude, longitude, etc.).
Now I want to load them into the Pelias geocoder. Can you help me by suggesting the steps, along with the installation process?
Note: I also have my own PBF.
You can use the csv-importer component.
See: https://github.com/pelias/csv-importer
An example I've used in the csv file:
name,source,layer,lon,lat,number,street,unit,city,district,region,postcode,id,addendum_json_custom
123 JACKSON AVE,custom,address,-86.66,40.7666,344,JACKSON AVE,,PERU,,IN,46970,651ds651d651,"{ ""customInfo"":""Something Custom"" }"
123 S 600 E,custom,address,-86.66,40.7666,4503,S 600 E,,PIERCETON,,IN,46562,651ewd2332e,"{ ""customInfo"":""Something Custom"" }"
Be sure to use the double quotes in the addendum json as specified in the documentation.
If you are doing Points of Interest, then you may choose a different layer like venue.
You will have to configure the Pelias config (pelias.json) with the location of the files to import, and the API targets to specify the source and layer.
As for installation, I've found the dockerized setup to be the best way to get started:
https://github.com/pelias/docker
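If you are generating the CSV from your own data programmatically, a small Python sketch like the one below (the values are copied from the example row above; the file name is made up) lets the csv and json modules take care of the double-quoting rules for the addendum_json_custom column:

# Hypothetical sketch: write a Pelias csv-importer file with the header shown
# above. csv + json handle the doubled quotes in addendum_json_custom.
import csv
import json

header = ["name", "source", "layer", "lon", "lat", "number", "street", "unit",
          "city", "district", "region", "postcode", "id", "addendum_json_custom"]

with open("custom_addresses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerow(["123 JACKSON AVE", "custom", "address", -86.66, 40.7666,
                     344, "JACKSON AVE", "", "PERU", "", "IN", "46970",
                     "651ds651d651", json.dumps({"customInfo": "Something Custom"})])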

How to read a SQL table in NiFi?

I am trying to create a basic flow in NiFi:
read a table from SQL
process it in Python
write it back to another SQL table
It is as simple as that.
But I am facing issues when I try to read the data in Python.
As far as I have learned, I need to use sys.stdin/stdout.
At the moment the script only reads and writes, as below.
import sys
import pandas as pd

# ExecuteStreamCommand pipes the incoming flowfile content to stdin
file = pd.read_csv(sys.stdin)
# whatever is written to stdout becomes the outgoing flowfile content
file.to_csv(sys.stdout, index=False)
I also attached screenshots of the processor properties (QueryDatabaseTableRecord, ExecuteStreamCommand, PutDatabaseRecord) and of the error message, but I don't think they are the issue.
There's a much easier way to do this if you're running 1.12.0 or newer: ScriptedTransformRecord. It's like ExecuteScript except it works on a per-record basis. This is what a simple Groovy script for it looks like:
def fullName = record.getValue("FullName")
def nameParts = fullName.split(/[\s]{1,}/)
record.setValue("FirstName", nameParts[0])
record.setValue("LastName", nameParts[1])
record
It's a new processor, so there's not much documentation on it yet aside from the (very good) documentation bundled with it, and samples might be sparse at the moment. If you want to use it and run into issues, feel free to join the nifi-users mailing list and ask for more detailed help.
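If you prefer to stay on the ExecuteStreamCommand route from the question, the same record transform as the Groovy example could be sketched in the Python script roughly like this (the FullName/FirstName/LastName column names are just the ones from the Groovy example, not necessarily what your table has):

# Hypothetical pandas equivalent of the Groovy transform above, for use
# with ExecuteStreamCommand: read CSV from stdin, split FullName into
# FirstName/LastName, write the result back to stdout.
import sys
import pandas as pd

df = pd.read_csv(sys.stdin)
parts = df["FullName"].str.split(n=1, expand=True)  # split on whitespace, at most once
df["FirstName"] = parts[0]
df["LastName"] = parts[1]
df.to_csv(sys.stdout, index=False)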

Misinformation in DataStage XML Export

As the title suggests, I am just trying to do a simple export of a DataStage job. The issue occurs when we export the XML and begin examining it. For some reason, the wrong information is being pulled from the job and placed in the XML.
As an example the SQL in a transform of the job may be:
SELECT V1,V2,V3 FROM TABLE_1;
Whereas the XML for the same transform may produce:
SELECT V1,Y6,Y9 FROM TABLE_1,TABLE_2;
It makes no sense to me how the export of a job could be different from the actual architecture.
The parameters I am using to export are:
Exclude Read Only Items: No
Include Dependent Items: Yes
Include Source Code with Routines: Yes
Include Source Code with Job Executable: Yes
Include Source Content with Data Quality Specifications: No
What tool are you using to view the XML? Try using something less smart, such as Notepad or Wordpad. This will determine/eliminate whether the problem is with your XML viewer.
You might also try exporting in DSX format and examining that output, to see whether the same symptoms are visible there.
Thank you all for the feedback. I realized that the issue wasn't necessarily with the XML. It had to do with numerous factors within our DataStage environment. As mentioned above, the data connections were old and unreliable. For some reason this does not impact our current production refresh, so it's a non-issue.
The other issue was the way that the generated SQL and custom SQL options behave when the XML is created. In my case, there were times when old code was kept in the system, but the option was switched from custom SQL to SQL generated from the columns. This led to inconsistent output from my script, so the mini project was scrapped.

pig shell setup: automatically executing pig scripts

Is there a way to automatically run a pig script when invoking pig from the command line?
The reason I'm wondering is that I have several import and define statements that I use over and over to set everything up. Is it possible to define this collection of statements somewhere so that they are executed automatically when I start pig? I apologize in advance if this is something trivial that I missed in the documentation.
Yes, you certainly can, from version 0.11 onwards.
You need to use the .pigbootup file.
Here is a nice blog post on setting up the pigbootup file:
http://hadoopified.wordpress.com/2013/02/06/pig-specify-a-default-script/
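As an illustration only (the jar path, loader and macro file name below are made up), a ~/.pigbootup file simply contains the Pig statements you want run at startup, one per line:

-- hypothetical ~/.pigbootup: executed automatically every time pig starts
REGISTER /usr/lib/pig/piggybank.jar;
DEFINE CSVLoader org.apache.pig.piggybank.storage.CSVLoader();
IMPORT 'my_macros.pig';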
If you want to include Pig-Macros from a file you can use the import command
Take a look at http://pig.apache.org/docs/r0.9.1/cont.html#import-macros for reference

Import test cases into TestLink 1.9.9 from xls format

I have to import my test cases, which are in Excel, into TestLink. How can I do it? Please help me out with this. Thanks in advance.
There's a decent macro available here: http://testlink-import.blogspot.co.uk/2009/11/test-link-import.html
It'll output an xml file but you'll probably find it doesn't import without some tweaking. I found it necessary to remove the <testsuite> tags and to add a <testcases> wrapper.
Also, if you add extra custom fields, you may find extra <customfields> tags.
I recommend running the macro with just one row of data and making sure you can import that first, then trying to do it in bulk. I still ended up with 1 record that just wouldn't import and I never got to the bottom of it, but the process was quicker overall than manually entering 190 test cases.
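For the tweaks described above (removing the <testsuite> tags and adding a <testcases> wrapper), a rough Python sketch of the post-processing could look like this; the file names are made up:

# Hypothetical clean-up of the macro's output for TestLink import:
# drop the <testsuite> open/close tags and wrap everything in <testcases>.
with open("macro_output.xml") as src:
    lines = [line for line in src
             if not line.strip().startswith(("<testsuite", "</testsuite"))]

with open("testlink_import.xml", "w") as dst:
    dst.write("<testcases>\n")
    dst.writelines(lines)
    dst.write("</testcases>\n")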
