Could HDFS orphan files in datanode? - hadoop

During a routine log-pruning job, where logs older than 60 days were being removed, a system administrator upgraded CDH from 4.3 to 4.6 (I know, I know)...
Normally, the log-pruning job frees about 40% of HDFS's available storage. However, during the upgrade, datanodes went down, were rebooted, and all sorts of madness ensued.
What's known is that HDFS received the delete commands, since the HDFS files/folders no longer exist, yet disk utilization is still unchanged.
My question is, could HDFS have removed the files from the NameNode's metadata without actually fulfilling the file block deletes among the DataNodes, effectively orphaning the file blocks?

I think the namenode tells the datanodes to delete orphaned blocks once it gets their report on the blocks they hold and notices that some of them don't belong to any file.
If you don't want these blocks to be deleted, you can put the system in safemode and try to manually look through the disks and copy the data. There is no automated way of doing this, but a tool to list orphaned blocks may be added in the future (as suggested in this JIRA).
Additionally, you can try to check the health of the namesystem using Hadoop's fsck.
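For example (a minimal sketch; the command names assume a CDH 4.x / Hadoop 2 client, and / is simply the root of the namespace):
hdfs dfsadmin -safemode enter   # while in safe mode the namenode schedules no block deletions or replications
hdfs fsck / -files -blocks -locations   # report namespace health and where each block is expected to live
hdfs dfsadmin -safemode leave   # resume normal operation, including cleanup of blocks that belong to no file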

Related

Osquery takes too much space

I have osquery installed on macOS, and there is a file /private/var/log/osquery/osquery-output.log that takes up almost 16 GB of disk space. What is it? Can I delete it safely?
By itself, osquery does very little. It can be configured to run a variety of queries to examine system state. Depending on configuration, these results might be stored locally or sent to a log aggregator. The configuration can either be from a local file, or from a remote server.
It sounds like you have an osquery install that is configured to log to local disk, but nothing is collecting those results.
osquery itself does not do anything with that file, so you can certainly truncate it. (Just deleting it will likely leave an unlinked file that is still held open, so the space won't be freed.) But that file implies a misconfigured setup.
Should it be logging to local disk? What consumes those logs? Etc.
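For example, to reclaim the space without removing the file out from under the daemon (plain shell, run as root; the path is the one from the question):
sudo sh -c ': > /private/var/log/osquery/osquery-output.log'   # truncate in place; any open file handle stays valid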

5.6 GB not enough for Cloudera?

I am running Cloudera Hadoop on my laptop and Oracle VirtualBox VM.
I have given it 5.6 GB out of my 8 GB of RAM, and six of my eight cores as well.
And still I am not able to keep it up and running.
Even without load, services would not stay up, and when I try a query, at least Hive goes down within 20 minutes. Sometimes they go down like dominoes, one after another.
More memory seemed to help somewhat: with 3 GB and all services enabled, Hue was blinking red whenever it managed to come up at all, and after a reboot it took 30-60 minutes before I could get the system up enough to even try running anything on it.
There have been two sensible notes (that I have managed to find):
- A warning about swapping.
- A crash note saying the system had used 26 GB of virtual memory, which was not enough.
My dataset is less than one megabyte, so it is hard to understand why the system would balloon to dozens of gigabytes. Whatever the reason was, it has passed: the system now runs more steadily within the 5.6 GB I have given it, after closing down a few services (see my answer to myself).
Still, it is only somewhat more stable: just now I got another swapping warning and Hive went down again. What could be the reason for more or less all Hadoop services going down when the VM starts to swap?
I don't have enough reputation to post the picture here, but when Hive went down again the VM was swapping 13 pages per second and using 5.9 GB out of the 5.6 GB given. So basically my system starts crashing more or less right after it starts to swap: "428 pages were swapped to disk in the previous 15 minute(s)".
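A quick way to watch the swapping as it happens, from inside the VM (standard Linux tools; the 5 means sample every five seconds):
# vmstat 5   (watch the si/so columns: pages swapped in and out per second)
# free -m    (compare used vs. total swap, in megabytes)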
I have used the default installation options as far as the hard drive is concerned.
The only addition is a shared folder between Windows and the VM. It behaves somewhat strangely, locking files all the time, so I use it like FTP, only for passing files from one system to another. I can go days without using it and the services still crash, so that is not the cause either.
Now that the system is mostly up, services still crash about twice a day: Service Monitor and Hive crash about equally often. After those come Activity Monitor and Event Server, which always seem to crash together. I believe YARN crashes as well, but it comes back up on its own. Last time Hive crashed first, followed by Service Monitor, Hive (a second time), Activity Monitor and Event Server.
As swap lives on disk, perhaps the problem is with the disk:
# cat /etc/fstab
# swapoff -a
# badblocks -v /dev/VolGroup/lv_swap
Checking blocks 0 to 8388607
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found.
# badblocks -vw /dev/VolGroup/lv_swap
Checking for bad blocks in read-write mode
From block 0 to 8388607
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found.
So there is nothing wrong with the swap disk, and I have not noticed any disk errors anywhere else either.
Note that you could also check the file system from the Windows side, but I expect that letting Windows "fix" your Linux file system is a good way to destroy the Linux install, so I kept my checks conservative; AFAIK the commands above are safe to execute.
About half of the services kept going down, so giving more specifics would be a long story.
I managed to make the system more stable by shutting down flume, hbase, impala, ks_indexer, oozie, spark and sqoop, and by giving more memory to some of the remaining services that complained they had not been given enough.
I also fixed a couple of things on the Windows side; I am not sure which of these helped:
- MsMpEng.exe kept my hard drive busy. I didn't have permission to kill it, but I lowered its priority as far as it would go.
- CcmExec.exe got stuck in a loop reading my DVD over and over. I solved this by taking the DVD out of the drive; later on I killed the process tree to keep it from interfering for a while.
I found these using the Windows Resource Monitor.
The QuickStart VM requires 4 GB: http://www.cloudera.com/content/cloudera-content/cloudera-docs/DemoVMs/Cloudera-QuickStart-VM/cloudera_quickstart_vm.html You should use that.
It's not clear whether you are using the QuickStart VM, though. It is set up to run just the essential services and tuned to conserve memory rather than exploit lots of it.
It sounds like you are running your own installation, on one virtual machine, on your Windows machine. You may be running an entire cluster's worth of services on one desktop machine. Each of these services has master processes, worker processes, monitoring processes, etc. You don't need most of them.
You have also probably left memory settings at defaults suitable for a server-class machine with 16+ GB of RAM. Remember that these services usually run across many machines, not all on one.
Finally, you're clearly swapping, and that makes things incredibly slow. Remember this is all through a VM too!
Bottom line, use the QuickStart VM if you really want a 1-machine cluster tuned correctly. If you want a real cluster or more services, you need more hardware.
Also consider: cloudera.com/live contains a full CDH 5.1 cluster + sample data, running on demand on AWS. Of course, the advantage of the VM is that you can BYOD, but if you're simply looking for a hands-on Hadoop experience, Live is a great option.

Redis's huge files won't delete?

I'm using Redis server for Windows (2.8.4, MSOpenTech) on Windows 8 64-bit.
It is working great, but even after I run:
I see this (and here are my questions):
When Redis-server.exe is up, I see 3 large files:
When Redis-server.exe is down, I see 2 large files:
Questions:
Didn't I just tell it to erase the whole DB? So why are those 2-3 huge files still there?
How can I completely erase those files (without them being regenerated)?
NB
It seems that it deletes keys without freeing the occupied space. If so, how can I free this unused space?
From https://github.com/MSOpenTech/redis/issues/83
"Redis uses the fork() UNIX system API to create a point-in-time snapshot of the data store for storage to disk. This impacts several features on Redis: AOF/RDB backup, master-slave synchronization, and clustering. Windows does not have a fork-like API available, so we have had to simulate this behavior by placing the Redis heap in a memory mapped file that can be shared with a child(quasi-forked) process. By default we set the size of this file to be equal to the size of physical memory. In order to control the size of this file we have added a maxheap flag. See the Redis.Windows.conf file in msvs\setups\documentation (also included with the NuGet and Chocolatey distributions) for details on the usage of this flag. "
I know this is an old thread, but I am facing the same issues with the file sizes.
In case you have space problems on your C: SSD drive (like me), you can make a directory junction:
1) Stop redis service
2) Move C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis folder to another drive / location.
3) Open a command prompt in C:\Windows\ServiceProfiles\NetworkService\AppData\Local then execute:
mklink /J "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis" "[newpath]"
PD: [newpath] must be absolute, like "D:\directory junctions\Redis"
4) Start the Redis service. The files now live on the other drive.
Check http://ss64.com/nt/mklink.html if you have any doubts about this command.
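The same steps as a single command sequence, assuming the service is registered under the name Redis and D:\Redis is the new location (both are placeholders for your setup; robocopy /MOVE removes the source folder, which must be gone before mklink can create the junction):
net stop Redis
robocopy "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis" "D:\Redis" /E /MOVE
mklink /J "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis" "D:\Redis"
net start Redis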
I faced this same issue on my development machine. I resolved it by stopping the Redis service and using WinDirStat (which is also what I used to detect the issue originally) to permanently delete the files in appdata/local/redis.
I then started Redis back up and things were working fine.
Before following this same procedure, others may want to ensure that this data isn't needed. In my case it wasn't critical, since this is my development workstation.
When you flush the DB you only flush the keys from memory. I'm not sure why you've got files with different names; it may be an artifact of the way the Windows port of Redis manages files, but Redis itself doesn't delete files when you remove keys. You will need to manage outdated files outside of Redis.
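You can see the distinction for yourself, assuming redis-cli.exe is on the PATH and the default MSOpenTech data folder is in use:
redis-cli flushall
redis-cli dbsize
dir "C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Redis"
dbsize now reports 0, yet the heap/RDB files listed by dir are still there until you clean them up yourself.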

recover deleted data from hdfs

We have a Hadoop cluster running v1.2.1. We deleted one of our HDFS folders by mistake, but we shut down the cluster immediately. Is there any way to get our data back?
Even getting back part of our data would be better than nothing! Since the data was so large, probably only a little of it has actually been removed so far.
Thanks for your help.
This could be an easy fix if you have set fs.trash.interval to a value greater than 0. If so, HDFS's trash option is enabled and your files should be located in the trash directory, which by default is /user/X/.Trash.
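If that is the case, restoring is just a move out of trash, for example (a sketch; X and the folder names are placeholders for your user and paths):
hadoop fs -ls /user/X/.Trash/Current
hadoop fs -mv /user/X/.Trash/Current/path/to/deleted/folder /path/to/restore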
Otherwise, your best option is probably to find and use a data recovery tool. Some quick Googling turned up this cross-platform tool available under GNU licensing that runs from the terminal: http://www.cgsecurity.org/wiki/PhotoRec. It works on many different types of file systems, and it's possible it may work for HDFS.

Hadoop is sometimes too slow (stuck at 100%)

I set up a cluster of ten machines on which I installed CDH4 (YARN).
I run the NameNode, the ResourceManager and the HistoryServer on the same node, and the client on another node.
On the rest of the machines, I run a DataNode and a NodeManager.
I launched my application on a 100 GB file. It worked at first and was relatively quick, but now it gets really, really slow at the end of the map phase (going from around 90% to 100% takes 30 minutes).
I don't know whether the problem comes from the way I coded the program or the way I configured Cloudera CDH4.
The problem is that it sometimes works and sometimes doesn't, although I didn't change anything.
I found out why it took so much time at the end: I thought the command hadoop fs -expunge would empty the trash, but it didn't, so when Hadoop tried to write the result files to HDFS it was very slow because there was very little space left.
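In the same situation you can confirm how much space is actually left and, if needed, bypass the trash entirely (a hedged sketch with CDH4-era command names; the trash path is a placeholder for your own user's):
hadoop dfsadmin -report   # shows DFS used and remaining per datanode
hadoop fs -rm -r -skipTrash /user/$USER/.Trash   # permanently reclaims the space held by trashed files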
