I am new to Hadoop. I have to find the trend of symbols traded among users.
I have 2 machines, b040n10 and b040n11. The files on these machines are listed below:
b040n10:/u/ssekar>ls -lrt
-rw-r--r-- 1 root root 482342353 Feb 8 2014 A.log
-rw-r--r-- 1 root root 481231231 Feb 8 2014 B.log
b040n11:/u/ssekar>ls -lrt
-rw-r--r-- 1 root root 412312312 Feb 8 2014 C.log
-rw-r--r-- 1 root root 412356315 Feb 8 2014 D.log
There is a field called "symbol_name" on all these logs (example below).
IP=145.45.34.2;***symbol_name=ABC;***timestamp=12:13:05
IP=145.45.34.2;***symbol_name=XYZ;***timestamp=12:13:56
IP=145.45.34.2;***symbol_name=ABC;***timestamp=12:14:56
I have Hadoop running on my laptop, and the 2 machines above are connected to it (they can be used as DataNodes).
My task now is to get the list of symbol_names and the count for each symbol, as shown below:
ABC - 2
XYZ - 1
Should I now:
1. copy all the files (A.log, B.log, C.log, D.log) from b040n10 and b040n11 to my laptop, and
2. issue a copyFromLocal command to load them into HDFS and analyze the data?
Or is there a better way to find the symbol_name counts without copying these files to my laptop?
The question is a basic one, but I am new to Hadoop; please help me understand and use Hadoop better. Please let me know if more information on the question is needed.
Thanks
Copying the files from Hadoop to your local laptop defeats the entire purpose of Hadoop, which is to move the processing to the data, not the other way around. When you really have "big data", you won't be able to move the data around to process it locally.
Your problem is a typical case for Map/Reduce; all you need is a job that counts the occurrence of each symbol. Just search for a Map/Reduce WordCount example and adapt it to your case.
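As a sketch of what that adaptation could look like with Hadoop Streaming (so you can stay in Python; mapper.py and reducer.py are hypothetical names, and the parsing assumes the exact log format shown above):
#!/usr/bin/env python
# mapper.py -- emit "symbol<TAB>1" for every symbol_name field seen
import sys

for line in sys.stdin:
    for field in line.strip().split(';'):
        field = field.strip('*')
        if field.startswith('symbol_name='):
            print('%s\t1' % field[len('symbol_name='):])

#!/usr/bin/env python
# reducer.py -- sum the counts per symbol (streaming sorts by key for us)
import sys

current, count = None, 0
for line in sys.stdin:
    symbol, _, n = line.rstrip('\n').partition('\t')
    if symbol != current:
        if current is not None:
            print('%s - %d' % (current, count))
        current, count = symbol, 0
    count += int(n or 1)
if current is not None:
    print('%s - %d' % (current, count))
You would hdfs dfs -put the logs from both machines into HDFS and submit the two scripts with the hadoop-streaming jar that ships with your distribution; the job output is then the "symbol - count" pairs you listed.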
Running the below code downloads a model - does anyone know what folder it downloads it to?
!pip install -q transformers
from transformers import pipeline
model = pipeline('fill-mask')
Update 2021-03-11: The cache location has now changed and is located in ~/.cache/huggingface/transformers, as also detailed in the answer by @victorx.
This post should shed some light on it (plus some investigation of my own, since it is already a bit older).
As mentioned, the default location on a Linux system is ~/.cache/torch/transformers/ (I'm using transformers v2.7 currently, but it is unlikely to change anytime soon). The cryptic folder names in this directory seemingly correspond to the Amazon S3 hashes.
Also note that the pipeline tasks are just a "rerouting" to other models. To know which one you are currently loading, see here. For your specific case, pipeline('fill-mask') actually utilizes a distilroberta-base model.
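If you want to confirm which checkpoint a pipeline resolved to, something like this should work (the exact attribute has moved around between transformers versions, so treat it as a sketch):
from transformers import pipeline

fill_mask = pipeline('fill-mask')
# prints the resolved checkpoint name, e.g. 'distilroberta-base'
print(fill_mask.model.config.name_or_path)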
As of Transformers version 4.3, the cache location has been changed.
The exact place is defined in this code section https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L181-L187
On Linux, it is at ~/.cache/huggingface/transformers.
The file names there are basically SHA hashes of the original URLs from which the files are downloaded. The corresponding .json files can help you figure out what the original file names are.
On Windows 10, replace ~ with C:\Users\username, or in cmd do cd /d "%HOMEDRIVE%%HOMEPATH%".
So the full path will be: C:\Users\username\.cache\huggingface\transformers
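If you want to map the hashed file names back to their original URLs, the .json sidecar files mentioned above can be read directly; a minimal sketch for the Linux location (assuming the sidecars contain a url key, as they did in the versions I have seen):
import json
import os

cache_dir = os.path.expanduser('~/.cache/huggingface/transformers')
for name in sorted(os.listdir(cache_dir)):
    if name.endswith('.json'):
        with open(os.path.join(cache_dir, name)) as f:
            meta = json.load(f)
        # each sidecar records where its blob was downloaded from
        print(name[:-len('.json')], '<-', meta.get('url'))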
As of transformers 4.22, the path appears to be (tested on CentOS):
~/.cache/huggingface/hub/
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="sentence-transformers/all-MiniLM-L6-v2", filename="config.json")
ls -lrth ~/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/snapshots/7dbbc90392e2f80f3d3c277d6e90027e55de9125/
total 4.0K
lrwxrwxrwx 1 alex alex 52 Jan 25 12:15 config.json -> ../../blobs/72b987fd805cfa2b58c4c8c952b274a11bfd5a00
lrwxrwxrwx 1 alex alex 76 Jan 25 12:15 pytorch_model.bin -> ../../blobs/c3a85f238711653950f6a79ece63eb0ea93d76f6a6284be04019c53733baf256
lrwxrwxrwx 1 alex alex 52 Jan 25 12:30 vocab.txt -> ../../blobs/fb140275c155a9c7c5a3b3e0e77a9e839594a938
lrwxrwxrwx 1 alex alex 52 Jan 25 12:30 special_tokens_map.json -> ../../blobs/e7b0375001f109a6b8873d756ad4f7bbb15fbaa5
lrwxrwxrwx 1 alex alex 52 Jan 25 12:30 tokenizer_config.json -> ../../blobs/c79f2b6a0cea6f4b564fed1938984bace9d30ff0
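Note that hf_hub_download returns the resolved local path, so instead of hunting through the cache by hand you can simply print it:
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="sentence-transformers/all-MiniLM-L6-v2",
                       filename="config.json")
print(path)  # resolves into ~/.cache/huggingface/hub/... on Linux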
I am seeing nonsense values for user names in folder permissions for NFS-mounted HDFS locations, while the HDFS locations themselves (using Hortonworks HDP 3.1) appear fine. E.g.
➜ ~ ls -lh /nfs_mount_root/user
total 6.5K
drwx------. 3 accumulo hdfs 96 Jul 19 13:53 accumulo
drwxr-xr-x. 3 92668751 hadoop 96 Jul 25 15:17 admin
drwxrwx---. 3 ambari-qa hdfs 96 Jul 19 13:54 ambari-qa
drwxr-xr-x. 3 druid hadoop 96 Jul 19 13:53 druid
drwxr-xr-x. 2 hbase hdfs 64 Jul 19 13:50 hbase
drwx------. 5 hdfs hdfs 160 Aug 26 10:41 hdfs
drwxr-xr-x. 4 hive hdfs 128 Aug 26 10:24 hive
drwxr-xr-x. 5 h_etl hdfs 160 Aug 9 14:54 h_etl
drwxr-xr-x. 3 108146 hdfs 96 Aug 1 15:43 ml1
drwxrwxr-x. 3 oozie hdfs 96 Jul 19 13:56 oozie
drwxr-xr-x. 3 882121447 hdfs 96 Aug 5 10:56 q_etl
drwxrwxr-x. 2 spark hdfs 64 Jul 19 13:57 spark
drwxr-xr-x. 6 zeppelin hdfs 192 Aug 23 15:45 zeppelin
➜ ~ hadoop fs -ls /user
Found 13 items
drwx------ - accumulo hdfs 0 2019-07-19 13:53 /user/accumulo
drwxr-xr-x - admin hadoop 0 2019-07-25 15:17 /user/admin
drwxrwx--- - ambari-qa hdfs 0 2019-07-19 13:54 /user/ambari-qa
drwxr-xr-x - druid hadoop 0 2019-07-19 13:53 /user/druid
drwxr-xr-x - hbase hdfs 0 2019-07-19 13:50 /user/hbase
drwx------ - hdfs hdfs 0 2019-08-26 10:41 /user/hdfs
drwxr-xr-x - hive hdfs 0 2019-08-26 10:24 /user/hive
drwxr-xr-x - h_etl hdfs 0 2019-08-09 14:54 /user/h_etl
drwxr-xr-x - ml1 hdfs 0 2019-08-01 15:43 /user/ml1
drwxrwxr-x - oozie hdfs 0 2019-07-19 13:56 /user/oozie
drwxr-xr-x - q_etl hdfs 0 2019-08-05 10:56 /user/q_etl
drwxrwxr-x - spark hdfs 0 2019-07-19 13:57 /user/spark
drwxr-xr-x - zeppelin hdfs 0 2019-08-23 15:45 /user/zeppelin
Notice the difference for users ml1 and q_etl: they have numerical user values when running ls on the NFS locations, rather than their user names.
Even doing something like...
[hdfs@HW04 ml1]$ hadoop fs -chown ml1 /user/ml1
does not change the NFS permissions. Even more annoying, when trying to change the NFS mount permissions as root, we see
[root@HW04 ml1]# chown ml1 /nfs_mount_root/user/ml1
chown: changing ownership of ‘/nfs_mount_root/user/ml1’: Permission denied
This causes real problems, since the differing uid means that I can't access these dirs to write to them even as the "correct" user. Not sure what to make of this. Does anyone with more Hadoop experience have any debugging suggestions or fixes?
UPDATE:
Doing a bit more testing/debugging, I found that the rules appear to be:
If the NFS server node has no uid (or gid?) that matches the uid of the user on the node accessing the NFS mount, we get the weird numerical uid values seen here.
If there is a uid associated with the username of the user on the requesting node, then that is the uid we see assigned to the location when accessing via NFS (even if that uid on the NFS server node does not actually belong to the requesting user), e.g.
[root@HW01 ~]# clush -ab id ml1
---------------
HW[01,04] (2)
---------------
uid=1025(ml1) gid=1025(ml1) groups=1025(ml1)
---------------
HW[02-03] (2)
---------------
uid=1027(ml1) gid=1027(ml1) groups=1027(ml1)
---------------
HW05
---------------
uid=1026(ml1) gid=1026(ml1) groups=1026(ml1)
[root@HW01 ~]# exit
logout
Connection to hw01 closed.
➜ ~ ls -lh /hdpnfs/user
total 6.5K
...
drwxr-xr-x. 6 atlas hdfs 192 Aug 27 12:04 ml1
...
➜ ~ hadoop fs -ls /user
Found 13 items
...
drwxr-xr-x - ml1 hdfs 0 2019-08-27 12:04 /user/ml1
...
[root@HW01 ~]# clush -ab id atlas
---------------
HW[01,04] (2)
---------------
uid=1027(atlas) gid=1005(hadoop) groups=1005(hadoop)
---------------
HW[02-03] (2)
---------------
uid=1024(atlas) gid=1005(hadoop) groups=1005(hadoop)
---------------
HW05
---------------
uid=1005(atlas) gid=1006(hadoop) groups=1006(hadoop)
If you are wondering why I have users on the cluster with uids that vary across the cluster nodes, see the problem posted here: How to properly change uid for HDP / ambari-created user? (note that these odd uid settings for hadoop service users were set up by Ambari by default).
After talking with someone more knowledgeable in HDP hadoop, I found that the problem is that when Ambari was set up and run to initially install the hadoop cluster, there may have been other preexisting users on the designated cluster nodes.
Ambari creates its various service users by giving them the next available UID from a node's block of user UIDs. However, prior to installing Ambari and HDP on the nodes, I had created some users on the to-be namenode (and others) in order to do some initial maintenance checks and tests. I should have just done this as root. Adding these extra users offset the UID counters on those nodes, so as Ambari created users on each node and incremented the UIDs, it was starting from different counter values on different nodes. Thus the UIDs did not sync, which caused problems with HDFS NFS.
To fix this, I...
Used Ambari to stop all running HDP services
Went to Service Accounts in Ambari and copied all of the expected service user name strings
For each user, ran something like id <service username> to get the group(s) for each user. For service groups (which may have multiple members), you can do something like grep 'group-name-here' /etc/group. I recommend doing it this way, as the Ambari docs of default users and groups do not have some of the info that you can get here.
Used userdel and groupdel to remove all the Ambari service users and groups
Then recreated all the groups across the cluster
Then recreated all the users across the cluster (you may need to specify the UID if some nodes have users that others do not)
Restarted the HDP services (hopefully everything should still run as if nothing happened, since HDP should be looking up the literal user name strings, not the UIDs)
For the last parts, you can use something like clustershell, e.g.
# remove user
$ clush -ab userdel <service username>
# check that the UID you want to use is actually available on all nodes
$ clush -ab id <some specific UID you want to use>
# assign that UID to a new service user
$ clush -ab useradd --uid <the specific UID> --gid <groupname> <service username>
To get the lowest common available UID from each node, I used...
# for UID
getent passwd | awk -F: '($3>1000) && ($3<10000) && ($3>maxuid) { maxuid=$3; } END { print maxuid+1; }'
# for GID (using each user's primary GID from passwd)
getent passwd | awk -F: '($4>1000) && ($4<10000) && ($4>maxgid) { maxgid=$4; } END { print maxgid+1; }'
Ambari also creates some /home dirs for users. Once you are done recreating the users, you will need to fix the ownership and permissions of those dirs (you can use something like clush there as well).
* Note that this was a huge pain, and you would need to manually correct the UIDs of users whenever you added another cluster node. I did this for a test cluster, but for production (or even a larger test) you should just use Kerberos or SSSD + Active Directory.
I'm on OS X 10.9.4 and trying to use newsyslog to rotate my app development log files.
More specifically, I want to rotate the files daily but only if they are not empty (newsyslog writes one or two lines to every logfile it rotates, so let's say I only want to rotate logs that are at least 1kb).
I created a file /etc/newsyslog.d/code.conf:
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/Users/manuel/code/**/log/*.log manuel:staff 644 7 1 $D0 GN
The way I understand the man page for the configuration file is that size and when conditions should work in combination, so logfiles should be rotated every night at midnight only if they are 1kb or larger.
Unfortunately this is not what happens. The log files are rotated every night, no matter whether they contain only the rotation message from newsyslog or anything else:
~/code/myapp/log (master) $ ls -lah
total 32
drwxr-xr-x 6 manuel staff 204B Aug 8 00:17 .
drwxr-xr-x 22 manuel staff 748B Jul 25 14:56 ..
-rw-r--r-- 1 manuel staff 64B Aug 8 00:17 development.log
-rw-r--r-- 1 manuel staff 153B Aug 8 00:17 development.log.0
~/code/myapp/log (master) $ cat development.log
Aug 8 00:17:41 localhost newsyslog[81858]: logfile turned over
~/code/myapp/log (master) $ cat development.log.0
Aug 7 00:45:17 Manuels-MacBook-Pro newsyslog[34434]: logfile turned over due to size>1K
Aug 8 00:17:41 localhost newsyslog[81858]: logfile turned over
Any tips on how to get this working would be appreciated!
What you're looking for (rotate files daily unless they haven't logged anything) isn't possible using newsyslog. The man page you referenced doesn't say anything about size and when being combined, other than to say that if when isn't specified, then it is as if only size was specified. The reality is that the log is rotated when either condition is met. If the utility is like its FreeBSD counterpart, it won't rotate logs smaller than 512 bytes unless the binary flag is set.
macOS's newer replacement for newsyslog, ASL, also doesn't have the behavior you desire. As far as I know, the only utility which has this is logrotate, via its notifempty configuration option. You can install logrotate on your Mac using Homebrew.
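For reference, a minimal logrotate sketch of the behavior you want (the path and rotation count are assumptions based on your setup; where brew's logrotate looks for its config depends on your install):
/Users/manuel/code/myapp/log/*.log {
    daily
    rotate 7
    missingok
    notifempty
}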
Since making a hash of a complete binary file would be far too heavy to compute in a reasonably fast time:
What file information is sufficient for hashing a file? The following lists the properties the resulting hash should ideally have:
collision-free with respect to other files in the directory
fast
catch all file changes
As a rule of thumb, the less information we need to create enough entropy, the better. Since the speed of retrieval for specific information may depend largely on the given environment (OS, file-IO of the language, IO of the used library, etc.), it is disregarded here.
(This is my first attempt at a community wiki. My reason for making it one is that the information asked here is very generic but (hopefully) informative. I also would like this question to be marked as a community wiki, so it can be improved where fit. )
General Overview
Our goal here is to capture as many differences between two file states as possible while not using redundant data. Thus each informational source must be a disjoint subset of the information of the file's state.
The following items represent sources of information about a file:
the name of the file
the directory-path relative to the specified document-root (aka absolute from document-root)
the file's permissions
the file's owner (user/group)
the last change time
the size of the file
the hostname of the machine the file resides on
the actual saved binary data
Per Item Considerations
Name of File
The name of the file is part of its absolute filesystem path (the last bit) and, as @La-comadreja said, it is unique in that no two files on a system can have the same absolute path. Using the file's name in combination with the rest of its absolute path (see Directory-Path for more information) is highly encouraged to avoid hash collisions with other files.
Directory-Path
While the file's absolute path will be perfectly unique, it should be noted that in certain circumstances hashing the absolute path may be inappropriate. For instance, comparing the hashes of two files on different machines will most likely fail when the files do not have identical absolute paths on both machines. This becomes even more problematic on machines with different OSes and/or architectures. It is therefore encouraged to specify a document-root and resolve an absolute path from there.
Permissions
If you want to track changes to a file's permissions, the tests below indicate that you would need to incorporate them into your hash directly, as they do not change any other information about the file (most notably the timestamp). Note however that permissions are handled quite differently on different machines, so caution must be exercised here (for instance, use a canonical permission translation scheme).
Ownership
Ownership, just as permissions, is handled very differently across architectures and filesystems. A change of ownership does not change other information (as indicated by the tests below).
Timestamp
The timestamp of a file is also something that is not implemented uniformly across all (or at least the most common) systems. First of all, there are different timestamps on different filesystems we could be looking at: creation date, modified date, access date, etc. For our purpose the modified date is most suitable, as it is supported by most of the available filesystems [1] and holds exactly the information we need: the last change to a file. However, comparing files across different OSes may pose a problem, as Windows and Unix handle timestamps (in general) differently (see here [2] for a detailed article about the problem). Note that the modification date of a file changes whenever the file has been edited (disregarding edge cases), so the timestamp also reflects changes in file size (the opposite does not hold true; see File size).
File size
The file size in bytes is a good indication of whether a file has been edited (except for permission, ownership, and name changes), as most edits change the file's content and thus its size. However, this does not hold when additions to a file are exactly as big as deletions, so the file's timestamp is the better indicator. (Retrieving the size itself is cheap, since it comes from filesystem metadata rather than from reading the content, but it adds little over the timestamp.)
Hostname
If one wants to compare files across multiple hosts and regard identical files on different hosts as different, then the hostname of the machine (or another suitable unique identifier for the host) should be included in the hash.
Binary Data
The binary data of the file has, of course, all the information necessary to check whether a file was changed. However, it is also too resource-intensive to be of any practical use. It is highly discouraged to use this information.
Suggestions
The following sources should be used to compare files (a combined sketch follows after these lists):
the name of the file
the directory path
the timestamp (see above for problems)
The following extra sources can be used to track more information:
permissions (see above)
ownership (see above)
hostname (when comparing across different machines)
The following sources of information should be disregarded:
file size
binary data
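Putting the suggested sources together, here is a minimal Python sketch (document_root, the field separator, and the choice of SHA-256 are my assumptions; enable the optional fields as discussed above):
import hashlib
import os

def file_state_hash(path, document_root, include_perms=False,
                    include_owner=False, hostname=None):
    # Hash a file's identifying metadata instead of its contents.
    st = os.stat(path)
    parts = [
        os.path.relpath(path, document_root),  # file name + directory path
        str(st.st_mtime),                      # last modification time
    ]
    if include_perms:
        parts.append(oct(st.st_mode & 0o777))  # permission bits
    if include_owner:
        parts.append('%d:%d' % (st.st_uid, st.st_gid))
    if hostname is not None:
        parts.append(hostname)
    return hashlib.sha256('\0'.join(parts).encode('utf-8')).hexdigest()
Two files collide only if they share the relative path and the chosen metadata, which matches the uniqueness argument above.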
Tests
I did some tests on Debian checking whether changing one piece of information would change another. Most interestingly, renaming, permission changes, and owner changes did not cause a timestamp or file-size change. (Note that these tests were only done on Debian Linux. Other OSes will likely behave differently.)
$ ls -l
-rw-r--r-- 1 alex alex 30 Apr 26 11:04 bar
-rw-r--r-- 1 alex alex 0 Apr 26 11:03 baz
-rw-r--r-- 1 alex alex 14 Apr 26 11:04 foo
$ mv baz baz2
$ ls -l
-rw-r--r-- 1 alex alex 30 Apr 26 11:04 bar
-rw-r--r-- 1 alex alex 0 Apr 26 11:03 baz2
-rw-r--r-- 1 alex alex 14 Apr 26 11:04 foo
$ chmod 777 foo
$ ls -l
-rw-r--r-- 1 alex alex 30 Apr 26 11:04 bar
-rw-r--r-- 1 alex alex 0 Apr 26 11:03 baz2
-rwxrwxrwx 1 alex alex 14 Apr 26 11:04 foo
$ mv baz2 baz
$ echo "Another string" >> bar
$ ls -l
-rw-r--r-- 1 alex alex 45 Apr 26 11:17 bar
-rw-r--r-- 1 alex alex 0 Apr 26 11:03 baz
-rwxrwxrwx 1 alex alex 14 Apr 26 11:04 foo
$ sudo chown root baz
$ ls -l
-rw-r--r-- 1 alex alex 45 Apr 26 11:17 bar
-rw-r--r-- 1 root alex 0 Apr 26 11:03 baz
-rwxrwxrwx 1 alex alex 14 Apr 26 11:04 foo
Assuming all the files are on the same machine, directory path and file name should produce a unique combination, because two files in the same directory cannot have the same name. Directory path, file name, and timestamp of last change should capture each change.
If the files are on different machines, the machine name should be included in the directory path.
I have a local file
-rw-r--r-- 1 me developers 102445154 Oct 22 10:02 file1.csv
which I'm attempting to put to hdfs:
/usr/bin/hdfs dfs -put ./file1.csv hdfs://000.00.00.00/user/me/
which works fine, but the group is wrong
-rw-r--r-- 3 me me 102445154 2013-10-22 10:23 hdfs://000.00.00.00/user/file1.csv
How do I get the group developers to come with?
Use the chgrp option on the file after the put:
/usr/bin/hdfs dfs -chgrp developers hdfs://000.00.00.00/user/me/file1.csv
HDFS does not carry the local group over on -put: following the BSD rule, a new file's group is inherited from its parent directory. So you can also chgrp the target directory to developers once, and files created there will pick that group up.