I need to report Hadoop metrics (such as jvm and cldb) to a text file. To test this, I modified the hadoop-metrics file in the conf directory on one of the nodes, but the output files still didn't appear.
I tried restarting the YARN NodeManager and the node itself, but still no result.
Do I need to do some additional magic, like changing environment variables or other configs?
The problem was a wrong config: I had been using a sample config file that was supposed to report NameNode and ResourceManager metrics,
but my node ran neither of those daemons.
I added entries for the metrics my node actually exposes instead, and it works fine.
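For reference, a minimal sketch of what the file-based entries in hadoop-metrics.properties can look like, using the FileContext sink; the periods and output paths here are illustrative assumptions:

# write the jvm and cldb metrics contexts to local files every 10 seconds
jvm.class=org.apache.hadoop.metrics.file.FileContext
jvm.period=10
jvm.fileName=/tmp/jvm_metrics.log
cldb.class=org.apache.hadoop.metrics.file.FileContext
cldb.period=10
cldb.fileName=/tmp/cldb_metrics.log

Each context (jvm, cldb, ...) needs its own class/period/fileName triple, and the daemon that emits that context has to be restarted for the change to take effect.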
I have a Hadoop cluster consisting of 4 nodes on which I am running a PySpark script. I have a config.ini file which contains details such as the locations of certificates, passwords, and server names that are needed by the script. Each time this file is updated, I need to sync the changes across all 4 nodes. Is there a way to avoid that?
I have never needed to sync changes to the script itself: making them on just one node and running it from there is enough. Is the same possible for the config file?
The most secure answer is likely to learn how to use a keystore with Spark.
A little less secure, but still good: you could put the file in HDFS and simply reference it from every node (lower security, but easier to use).
Insecure methods that are easy to use:
You can pass the file to spark-submit with --files, which transfers it to the cluster for you (see the sketch after this list).
Or you could pass the individual values as arguments in your spark-submit invocation.
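A minimal sketch of the --files approach, assuming a PySpark script named my_script.py (the file and script names are illustrative):

spark-submit --files config.ini my_script.py

# inside my_script.py: read the copy Spark shipped with the job
from pyspark.sql import SparkSession
from pyspark import SparkFiles
import configparser

spark = SparkSession.builder.getOrCreate()
config = configparser.ConfigParser()
config.read(SparkFiles.get("config.ini"))  # resolves the local path of the shipped file

With this, only the copy of config.ini on the node you submit from matters; Spark distributes it for you.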
I'm using Flink 1.1.4 and have added the memory-debugging configuration parameters to flink-conf.yaml, as described in Memory and Performance Debugging:
taskmanager.debug.memory.startLogThread: true
taskmanager.debug.memory.logIntervalMs: 1000
After restarting Flink, I'm seeing the new parameters added to the Job Manager interface, but I'm unable to see any new logs.
Any idea about what I may be missing?
It seems this was resolved on the mailing list.
Key extracts, including one that confirms the exact settings were tested successfully:
That is exactly the right way to do it. Logging has to be at least
INFO and the parameter "taskmanager.debug.memory.startLogThread" set
to true. The log output should be under
"org.apache.flink.runtime.taskmanager.TaskManager".
Do you see other outputs for that class in the log?
Make sure you restarted the TaskManager processes after you changed
the config file.
Someone else just used the memory logging with the exact described
settings - it worked.
There is probably some mixup, you may be looking into the wrong log
file, or may be setting the value in a different config...
How do you start the flink cluster? If it's a standalone cluster and
you don't use a shared directory, then you'll find the log of the
taskmanager on the machine on which the taskmanager runs. If you use
YARN then you can activate log aggregation to retrieve the log easily
after the job has finished.
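As a concrete illustration of the "at least INFO" point above, this is roughly what the relevant line in Flink's conf/log4j.properties could look like (a sketch assuming the default log4j setup; the explicit logger entry is an assumption, since INFO is usually already the root level):

# keep the TaskManager logger at INFO so the memory log thread's output is visible
log4j.logger.org.apache.flink.runtime.taskmanager.TaskManager=INFO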
I am building my own Docker image for the Elasticsearch application.
One question I have: will the configuration file elasticsearch.yml be modified by the application on the fly?
I hope that never happens, even if the node is running in a cluster. But some other applications (like Redis) do modify their config file on the fly when the cluster status changes. If the configuration file can change on the fly, I would have to export it as a volume, since a Docker image cannot retain changes made at runtime.
No, you don't run any risk of your configuration file being overwritten. The configuration is read from that file and kept in memory. ES also allows you to change settings persistently at runtime, but those are stored in a separate global cluster state file (in data/CLUSTER_NAME/nodes/N/_state, where N is the 0-based node index) and re-read on each restart.
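For illustration, a persistent runtime change of that kind goes through the cluster settings API rather than through elasticsearch.yml; a minimal sketch, with an arbitrary example setting:

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "50mb"
  }
}'

Such settings end up in the cluster state mentioned above, so the elasticsearch.yml in your image stays untouched.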
Our Hadoop cluster shows the JobTracker process gradually eating up memory, to the point that we have to restart the cluster every week. I searched around for a possible solution; one post suggested decreasing mapred.jobtracker.completeuserjobs.maximum to 5.

I checked mapred-site.xml under the /hadoop-install/conf directory on the name node and found two entries for that parameter: one sets it to 30, the other to 5. When I go to any of the data nodes and check mapred-site.xml, I don't find the parameter at all. However, when I look at a running job on the M/R administration page and check its job file, the parameter shows as 100.

I'm really confused about where this parameter is set. And if I update it, do I need to restart the cluster? We are running Apache Hadoop 1.2.1 on Google Cloud.
Hadoop does not automatically copy the configuration files from your driver machine to all of the cluster machines. You need to do that via scp and/or rsync, or preferably with an automated deployment tool such as Chef, Ansible, or Puppet.
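For example, a hypothetical loop to push the conf directory to the other nodes (host names and paths are illustrative):

# sync the Hadoop conf directory from the name node to each data node
for host in datanode1 datanode2 datanode3; do
  rsync -av /hadoop-install/conf/ $host:/hadoop-install/conf/
done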
As for the individual job parameters: you can set them on a per-job basis by passing -D when you submit the job:
hadoop jar <path to jar>/myHadoopJobJar.jar -Dmapred.jobtracker.completeuserjobs.maximum=5
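To change it cluster-wide instead, the entry in mapred-site.xml on the JobTracker node would look something like this (a sketch; the JobTracker has to be restarted to pick it up, and the duplicate entries you found for the same property in one file are a likely source of the confusion, since the last one parsed normally wins):

<!-- keep only the last 5 completed jobs per user in JobTracker memory -->
<property>
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <value>5</value>
</property>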
How can I change the number of mappers/reducers in Hadoop? For some odd reason, mapred.tasttracker.map.tasks.maximum and mapred.tasttracker.reduce.tasks.maximum are not present in the mapred-site.xml. I did manage to find these settings in dse-mapred-default.xml but once the xml is opened, there's a note which indicates that the settings shouldn't be edited in this file and that the properties should be overridden in mapred-site.xml.
I have tried adding the two settings to mapred-site.xml and restarting Hadoop, and I was expecting the numbers to also be updated in dse-mapred-default.xml, but with no luck.
Could someone please shed some light on this?
It's not mapred.tasttracker.map.tasks.maximum, but mapred.tasktracker.map.tasks.maximum. I hope it is only a typo and you used the correct names in your config.
On startup, DSE creates the dse-mapred-default.xml and dse-core-default.xml files and fills them with defaults adapted to your local OS configuration and hardware. This is mostly for Hadoop's autotuning feature and to simplify the configuration of security-enabled Hadoop. Hadoop then loads config files in the following order:
Hadoop internal defaults (the defaults you can find in the Hadoop docs)
DSE defaults from dse-core-default.xml and dse-mapred-default.xml
User files: core-site.xml and mapred-site.xml.
Settings from files loaded later override settings loaded earlier. The final state of the configuration is never written back to the files with defaults, so you should not expect settings from mapred-site.xml to be copied into dse-mapred-default.xml.
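For illustration, the override in mapred-site.xml would look something like this (the values are placeholders to tune to your hardware):

<!-- per-TaskTracker concurrent map and reduce slot limits -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>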
If you're unsure what the final configuration is and whether your settings were properly applied, just run a job and look in the Hadoop log directory for files matching the pattern job_xxxxxxxxxxxx_xxxx_conf.xml, where x is a digit. You can also view the final config in the JobTracker HTTP console.