Cannot create directory in HDFS. Name node is in safe mode - hadoop

I have deployed Hadoop in Docker, running on an AWS EC2 Ubuntu AMI instance.
When I try to create a directory in HDFS, it fails with "Cannot create directory. Name node is in safe mode".
Below are the properties set in hdfs-site.xml:
dfs.replication = 1
dfs.namenode.name.dir = /usr/local/hadoop/data
When I check the HDFS report, it gives the output below.
bash-4.1# hdfs dfsadmin -report
19/01/05 12:34:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Safe mode is ON
Configured Capacity: 0 (0 B)
Present Capacity: 335872 (328 KB)
DFS Remaining: 0 (0 B)
DFS Used: 335872 (328 KB)
DFS Used%: 100.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Below is some detail about the NameNode.
bash-4.1# hdfs dfs -df
19/01/05 12:37:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Filesystem Size Used Available Use%
hdfs://0cd4da30c603:9000 0 335872 0 Infinity%
If I force it to leave safe mode, it goes back into safe mode within seconds.
bash-4.1# hdfs dfsadmin -safemode leave
19/01/05 12:42:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Safe mode is OFF
bash-4.1# hdfs dfsadmin -safemode get
19/01/05 12:42:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Safe mode is ON
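(When the NameNode keeps re-entering safe mode like this, its log normally states the reason, for example that no DataNodes have registered or that the volume holding dfs.namenode.name.dir is low on space. A minimal set of checks, assuming a standard layout under $HADOOP_HOME inside the container; the log file name is typical but may differ in your image:

jps                                    # are NameNode and DataNode both actually running?
grep -i "safe mode" $HADOOP_HOME/logs/hadoop-*-namenode-*.log | tail -n 20
hdfs dfsadmin -report | grep -i datanodes   # does any live DataNode show up at all?
)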
Below is my file system information.
bash-4.1# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 25G 6.2G 19G 26% /
tmpfs 64M 0 64M 0% /dev
tmpfs 492M 0 492M 0% /sys/fs/cgroup
/dev/xvda1 25G 6.2G 19G 26% /data/lab
/dev/xvda1 25G 6.2G 19G 26% /etc/resolv.conf
/dev/xvda1 25G 6.2G 19G 26% /etc/hostname
/dev/xvda1 25G 6.2G 19G 26% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 492M 0 492M 0% /proc/acpi
tmpfs 64M 0 64M 0% /proc/kcore
tmpfs 64M 0 64M 0% /proc/keys
tmpfs 64M 0 64M 0% /proc/timer_list
tmpfs 64M 0 64M 0% /proc/sched_debug
tmpfs 492M 0 492M 0% /proc/scsi
tmpfs 492M 0 492M 0% /sys/firmware
What I'm expecting is to be able to create a directory in HDFS so I can run a MapReduce job.
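(Once the report shows non-zero configured capacity and the NameNode stays out of safe mode, creating the directory is the easy part. A minimal sketch, assuming a target path of /user/root/input, which is purely illustrative:

hdfs dfsadmin -safemode get           # should now report: Safe mode is OFF
hdfs dfs -mkdir -p /user/root/input   # -p also creates missing parent directories
hdfs dfs -ls /user/root               # confirm the directory exists
)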

Related

GCP console, trying to build a project gives a "no disk space" error

I'm trying to build a Go project in my GCP console. However, when I run the build command I get the error "no space left on device". When I run the df command I see the following:
Filesystem Size Used Avail Use% Mounted on
overlay 60G 44G 17G 73% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sda1 60G 44G 17G 73% /root
/dev/disk/by-id/google-home-part1 4.8G 4.5G 38M 100% /home
/dev/root 2.0G 1.2G 820M 59% /lib/modules
shm 64M 0 64M 0% /dev/shm
tmpfs 7.9G 940K 7.9G 1% /google/host/var/run
I'm not sure how to interpret this. Is there something I can do to free up some disk space? I assume the culprit is /dev/disk/by-id/google-home-part1? I am trying to build the Go project in the /home/user1/gopath directory.
The df command shows that your /home filesystem is full. You must free some space there before you can build:
Filesystem Use% Mounted on
/dev/disk/by-id/google-home-part1 100% /home
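A minimal sketch for finding and freeing space under /home, assuming standard GNU tools and that your Go build/module caches live in your home directory (both are safe to clear; they will be rebuilt):

sudo du -xh --max-depth=1 /home | sort -h | tail -n 10   # biggest directories under /home
go clean -cache -modcache                                # drop Go's build and module caches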

Cannot install Ruby 2.5.1 with rbenv on an Ubuntu VPS

I'm trying to install Ruby 2.5.1 on my Ubuntu 18.04 VPS. When I run the command rbenv install 2.5.1, it shows an error:
/home/bet/.rbenv/plugins/ruby-build/bin/ruby-build: line 1295: cannot create temp file for here-document: No space left on device
mktemp: failed to create directory via template ‘/tmp/ruby-build.20200313131832.28894.XXXXXX’: No space left on device
mkdir: cannot create directory ‘’: No such file or directory
BUILD FAILED (Ubuntu 18.04 using ruby-build 20200224-6-gd8019fe)
Inspect or clean up the working tree at
I checked the disk space with df -h. It shows:
Filesystem Size Used Avail Use% Mounted on
udev 463M 0 463M 0% /dev
tmpfs 99M 1.2M 98M 2% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 3.9G 3.8G 0 100% /
tmpfs 493M 0 493M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 493M 0 493M 0% /sys/fs/cgroup
/dev/sda2 976M 145M 806M 16% /boot
tmpfs 99M 0 99M 0% /run/user/1000
/dev/loop2 92M 92M 0 100% /snap/core/8592
tmpfs 99M 0 99M 0% /run/user/1001
/dev/loop0 92M 92M 0 100% /snap/core/8689
tmpfs 99M 0 99M 0% /run/user/1002
Can anyone help me? Thank you.
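(The failing path, /tmp, lives on the root filesystem, which the df output shows at 100%, so the build has nowhere to write its temporary files. A minimal cleanup sketch, assuming a stock Ubuntu 18.04 server with apt and systemd; the sizes are illustrative:

sudo du -xh --max-depth=1 / | sort -h | tail -n 10   # what is eating the root filesystem
sudo apt-get clean                                   # remove cached .deb packages
sudo journalctl --vacuum-size=50M                    # trim old systemd journal logs
# The root LV is only 3.9G. If the volume group still has free extents,
# growing it may be the real fix (only run lvextend if vgs shows free space):
sudo vgs
# sudo lvextend -r -L +2G /dev/mapper/ubuntu--vg-ubuntu--lv
)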

Trying to install Manjaro and having a problem with allocation of free disk space

I'm trying to install Manjaro on a laptop with OEM Windows 10 on board, using the Manjaro Architect CLI installer. I created an LVM setup, partitioned logical volumes, put LUKS on the LVM, formatted the partitions to btrfs, and mounted them. When I moved on and began installing the DE, I hit this error:
checking available disk space...
error: Partition /mnt too full: 1333620 blocks needed, 0 blocks free
error: not enough free disk space
error: failed to commit transaction (not enough free disk space)
Errors occurred, no packages were upgraded.
==> ERROR: Failed to install packages to new root
Then I type df -hT and see this:
Filesystem Type Size Used Avail Use% Mounted on
dev devtmpfs 6.8G 0 6.8G 0% /dev
run tmpfs 6.9G 101M 6.8G 2% /run
/dev/sdb1 iso9660 2.7G 2.7G 0 100% /run/miso/bootmnt
cowspace tmpfs 256M 0 256M 0% /run/miso/cowspace
overlay_root tmpfs 11G 189M 11G 2% /run/miso/overlay_root
/dev/loop0 squashfs 21M 21M 0 100% /run/miso/sfs/livefs
/dev/loop1 squashfs 457M 457M 0 100% /run/miso/sfs/mhwdfs
/dev/loop2 squashfs 1.6G 1.6G 0 100% /run/miso/sfs/desktopfs
/dev/loop3 squashfs 592M 592M 0 100% /run/miso/sfs/rootfs
overlay overlay 11G 189M 11G 2% /
tmpfs tmpfs 6.9G 121M 6.8G 2% /dev/shm
tmpfs tmpfs 6.9G 0 6.9G 0% /sys/fs/cgroup
tmpfs tmpfs 6.9G 48M 6.8G 1% /tmp
tmpfs tmpfs 6.9G 2.3M 6.9G 1% /etc/pacman.d/gnupg
tmpfs tmpfs 1.4G 12K 1.4G 1% /run/user/1000
/dev/mapper/vg--default-lv--root btrfs 162G 1.1G 0 100% /mnt
/dev/mapper/crypto-home btrfs 60G 3.4M 60G 1% /mnt/home
/dev/mapper/crypto-project btrfs 40G 3.4M 40G 1% /mnt/home/project
/dev/nvme0n1p4 fuseblk 200G 20G 181G 10% /mnt/windows
/dev/nvme0n1p7 vfat 99M 512 99M 1% /mnt/boot/efi
What is wrong with this row?
/dev/mapper/vg--default-lv--root btrfs 162G 1.1G 0 100% /mnt
How can Use% be 100% and Avail be 0 when only 1.1G of 162G is used?
The solution is to manually unmount and remount the filesystem before installing packages. Possibly, closing and reopening the btrfs volume is also necessary. If you run into the issue while packages are being installed, you can do the unmount/remount and then just redo the package installation step without reformatting.
Previously I wrote the following; however, it did not help in a recent install:
I ran into the same issue. This is some sort of bug in btrfs. I stumbled upon a workaround. After manually creating a file and writing to it (touch /mnt/temp, dd if=/dev/zero of=/mnt/temp bs=1M count=1000), df began to report correct available space and I was able to resume the installation.
P.S. I am using btrfs directly on top of LUKS on a block device.
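For reference, the touch/dd workaround described above looks roughly like this when run from the installer shell (the paths and the 1000 MB size are taken from the description above):

touch /mnt/temp
dd if=/dev/zero of=/mnt/temp bs=1M count=1000   # force btrfs to actually allocate space
df -h /mnt                                      # should now report sane available space
rm /mnt/temp
btrfs filesystem usage /mnt                     # btrfs's own accounting, more reliable than df here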

MapReduce job is getting stuck

I am running a MapReduce program, but it gets stuck:
19/09/16 09:35:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/09/16 09:35:04 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
19/09/16 09:35:05 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/09/16 09:35:05 INFO input.FileInputFormat: Total input files to process : 1
19/09/16 09:35:06 INFO mapreduce.JobSubmitter: number of splits:1
19/09/16 09:35:06 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
19/09/16 09:35:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1568605566346_0002
19/09/16 09:35:07 INFO impl.YarnClientImpl: Submitted application application_1568605566346_0002
19/09/16 09:35:07 INFO mapreduce.Job: The url to track the job: http://ec2-18-222-170-204.us-east-2.compute.amazonaws.com:8088/proxy/application_1568605566346_0002/
19/09/16 09:35:07 INFO mapreduce.Job: Running job: job_1568605566346_0002
Here is my disk availability:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 786M 9.5M 776M 2% /run
/dev/sda3 184G 12G 163G 7% /
tmpfs 3.9G 138M 3.7G 4% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda5 125G 21G 98G 18% /home
cgmfs 100K 0 100K 0% /run/cgmanager/fs
tmpfs 786M 60K 786M 1% /run/user/1000
tmpfs 786M 0 786M 0% /run/user/1001
Could someone please tell me what is going wrong? It's just a single-node Hadoop cluster.
Thanks
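(A job that hangs at "Running job: ..." on a single-node cluster very often means YARN cannot start the ApplicationMaster, typically because no NodeManager is registered or it has too little memory/vcores configured; disk space is clearly not the problem here. A few checks, assuming the standard Hadoop/YARN CLI:

jps                      # NameNode, DataNode, ResourceManager and NodeManager should all be listed
yarn node -list          # at least one node should be in RUNNING state
yarn application -list   # current state of the submitted application
# The ResourceManager web UI on port 8088 (the tracking URL above) shows the same information.
)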

EC2 instance /var wiped out after stop/start

I'm using OpsWorks to manage my application instances. I have one load-based, EBS-backed instance that starts when load is high and stops when it's low. However, after the instance is stopped and started again, the contents of /var are completely removed. df -h shows:
/dev/xvda1 7.9G 2.2G 5.4G 29% /
udev 3.7G 12K 3.7G 1% /dev
tmpfs 1.5G 188K 1.5G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.7G 0 3.7G 0% /run/shm
/dev/xvdb 414G 199M 393G 1% /mnt
So /var is not on ephemeral storage. Why is its content being wiped, then?
OK, it seems that one of the OpsWorks default recipes mounts something (/mnt/var/www) over this folder. The solution was to use some other folder instead of /var/www.
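If you want to confirm from the instance what is being mounted over the directory, a quick check (assuming util-linux's findmnt is available) is:

findmnt /var/www        # shows whatever is currently mounted at that path, if anything
mount | grep /var       # all active mounts touching /var
grep /var /etc/fstab    # anything configured to mount there at boot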
