XDMoD: error "no aggregate table. have you ingested your data" - metrics

I am running XDMoD from http://open.xdmod.org/
on a CentOS virtual machine. The shredder command
xdmod-shredder -v -r resource -f pbs -d /var/spool/torque/server_priv/accounting
runs correctly, and so does the ingester command
xdmod-ingestor -v
But when I open the portal in a web browser, an error message still shows up saying "have you ingested your data". I can't figure out what the problem is. Kindly help.
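One way to narrow this down (a hedged sketch; the MySQL user, the modw_aggregates schema name, and the log location assume a default Open XDMoD install and may differ on yours):
# Check whether aggregation actually produced any tables:
mysql -u xdmod -p -e "SHOW TABLES" modw_aggregates
# Re-run the ingestor and keep the verbose output so aggregation errors are visible:
xdmod-ingestor -v 2>&1 | tee /tmp/xdmod-ingest.log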

Related

Facing error with JMeter in Non-GUI mode. Error: "Malformed option -"

I am using JMeter (5.1.1) in non-GUI mode and am unable to execute my test. It still runs fine in GUI mode but fails in non-GUI mode with "Error: Malformed option -". I removed the listeners from the JMX file, but it still has some HTTP Requests which are disabled; could that be the problem here?
I am currently using this command on my Windows 7 machine with Java 8.
Command:
jmeter -n -t (location: C:\Users\File.jmx) -l (location: C:\Users\Results.csv)
It is not showing results, nor is it updating the results.csv file; after each run it just shows
Error: Malformed option -
in the command window. Can someone tell me what I am missing or what I should do?
Please use it like the below:
jmeter -n -t D:\TestScripts\script.jmx -l D:\TestScripts\scriptresults.jtl
Hope this helps.
First you have to go to the JMeter bin path, where your JMeter jar is located
(for example, C:\apache-jmeter-5.0\bin).
Now, after going to that path in cmd, use the command below, given that your abc.jmx file is at the above-mentioned location; once execution starts it will automatically create def.jtl at that location. Hope this gives you clarity:
jmeter -n -t abc.jmx -l def.jtl
OK, so here is what I was doing wrong. Don't know if this is stupid, but it might help others.
Since we use JMeter at work, my drive was linked with a cloud folder where, I believe, none of the commands get executed. So I copied the installed JMeter folder and my JMX file to a local location:
before:
> C:\UsersABC\OneDrive-UserABC\JMETER\apache-jmeter-5.1.1\bin
now:
> C:\JMET\apache-jmeter-5.1.1\bin
That OneDrive path was causing all the trouble. But thanks to all of you guys, you helped me figure this out. Thanks a lot again :)
I got the same issue while trying the command "jmeter -n -t D:\load test 01\test.jmx -l D:\TestScripts\scriptresults.jtl".
Then I changed it to "jmeter -n -t D:\load_test_01\test.jmx -l D:\TestScripts\scriptresults.jtl" and it worked fine; in my case the issue was whitespace in the folder name.
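Quoting the paths also works if you cannot rename the folder (the same command as above, just with quotes around the paths that contain spaces):
jmeter -n -t "D:\load test 01\test.jmx" -l "D:\TestScripts\scriptresults.jtl"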

/var/log/cloud-init-output.log is not present on RHEL 7.5

I've got a custom hardened RHEL 7.5 AMI. I want to use user data to complete some deploy-time configuration. I've already ensured that /var/lib/cloud/* is removed before I create the AMI.
These are the contents of my user data:
echo "My script fired." >> /tmp/test.txt
echo "This line should produce an output log."
The file /tmp/test.txt is present, indicating that my script did indeed run. However, the expected result of the second statement is that a file /var/log/cloud-init-output.log should be produced, in accordance with the AWS docs. This file is not present.
How do I make sure that user data produces the expected output log file?
It appears that Red Hat felt the file was "completely unnecessary": https://bugzilla.redhat.com/show_bug.cgi?id=1424612
In order to view the user data output, the cloud-init entries in the system log need to be grepped instead:
sudo grep cloud-init /var/log/messages
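Since RHEL 7 uses systemd, the same output can also be read straight from the journal; the unit names below are the standard cloud-init units, so adjust if your image renames them:
sudo journalctl -u cloud-init -u cloud-config -u cloud-final --no-pager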

How to test if hbase is correctly running

I just installed HBase on an EC2 server (I also have HDFS installed; it's working).
My problem is that I don't know how to check whether my HBase is correctly installed.
To install HBase I followed this tutorial, in which they say you can check the HBase instance on the web UI at addressOfMyMachine:60010. I also checked port 16010, but that is not working either.
I get an error saying this:
Sorry, the page you are looking for is currently unavailable.
Please try again later.
If you are the system administrator of this resource then you should check the error log for details.
I managed to run the hbase shell, but I don't know whether my installation is working properly.
To check whether HBase is running from a shell script, execute the command below.
if echo "list" | hbase shell 2>&1 | grep -q "ERROR:"; then echo "HBase is not running"; fi
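A couple of additional quick checks (hedged; they assume HBase and the JDK tools are on the PATH of the user running them):
jps | grep -E 'HMaster|HRegionServer'   # the HBase daemons should show up here
echo "status" | hbase shell             # prints live/dead servers and load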

How to find the cluster name in Hadoop environment?

I am using CDH 5.5 and want to know whether there is a command or another way to find the cluster name.
I am actually trying to execute the API call below and it throws an error.
curl -u admin:admin 'http://localhost:7180/api/v1/clusters/namenode241'
error:
{
"message" : "Cluster 'namenodee241' not found."
}
Your command is correct except for one last part: you are including the cluster name namenode241 at the end. Remove that and execute
curl -u admin:admin 'http://localhost or hostname:7180/api/v1/clusters'
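The call returns JSON, so the cluster names are easy to pull out with a JSON tool (jq is assumed to be installed here; it is not part of CDH):
curl -u admin:admin 'http://localhost:7180/api/v1/clusters' | jq -r '.items[].name'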
Can't you just access Cloudera Manager? It usually runs on port 7180 and it shows the cluster name right on the home page; in my screenshot it's "Cluster 1".

Start Vertica database during boot on Linux

I have Vertica installed in an Ubuntu virtual machine and I'd like to have a specific database started during boot, instead of me having to log in, open admintools, and start it from there.
So, is there a command line that would allow me to start it without user interaction?
In which run level should I add this?
Also, I use a specific user to run everything Vertica-related; does this need to be taken into account in my boot script?
Why not just set the restart policy (restart on boot) in admintools under "Set Restart Policy"?
You have 3 options:
Never
ksafe
always -- choose this one to start on boot.
And that is it!
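The same setting can also be applied from the command line instead of the admintools menus (a hedged sketch; the database name is a placeholder and the tool name assumes a recent Vertica admintools):
admintools -t set_restart_policy -d mydb -p always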
admintools -t start_db
[dbadmin@hostname ~]$ admintools -t start_db --help
Usage: start_db [options]
Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database to be started
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  -i, --noprompts       do not stop and wait for user input (default false)
  -F, --force           force the database to start at an epoch before data
                        consistency problems were detected.
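Putting those options together, a non-interactive start that can go in a boot script looks roughly like this (database name and password are placeholders; it is run as the Vertica admin user the asker mentioned):
su - dbadmin -c "admintools -t start_db -d mydb -p 'mypassword' -i"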
