Hadoop streaming permission issues

Need help debugging a permission issue with Hadoop streaming. I'm trying to start an awk streaming job:
# create the script directory on all nodes
[pocal@oscbda01 ~]$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 ; do ssh -f oscbda$i mkdir -p /home/pocal/KS/comverse/awk/; done;
# copy the streaming files to all nodes
[pocal@oscbda01 ~]$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 ; do scp * oscbda$i:/home/pocal/KS/comverse/awk/; done;
# give 777 permissions to all files
[pocal@oscbda01 ~]$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 ; do ssh -f oscbda$i chmod 777 /home/pocal/KS/comverse/awk/*; done;
# start streaming
[pocal@oscbda01 ~]$ hadoop fs -rm -r /user/pocal/ks/comverse/one/out;\
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.0.0-cdh4.3.0.jar \
-Dmapreduce.job.reduces=0 \
-Dmapred.reduce.tasks=0 \
-mapper "awk -f /home/pocal/KS/comverse/awk/data_change.awk -f /home/pocal/KS/comverse/awk/selfcare.awk -f /home/pocal/KS/comverse/awk/selfcare_secondary_mapping.awk -f /home/pocal/KS/comverse/awk/out_sort.awk" \
-input "/user/pocal/ks/comverse/one/" \
-output "/user/pocal/ks/comverse/one/out"
And I get this error:
………..
attempt_201311041208_1379_m_000010_2: awk: fatal: can't open source file `/home/pocal/KS/comverse/awk/data_change.awk' for reading (Permission denied)
13/12/12 09:01:32 INFO mapred.JobClient: Task Id : attempt_201311041208_1379_m_000004_2, Status : FAILED
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
attempt_201311041208_1379_m_000004_2: awk: fatal: can't open source file `/home/pocal/KS/comverse/awk/data_change.awk' for reading (Permission denied)
13/12/12 09:01:33 INFO mapred.JobClient: Task Id : attempt_201311041208_1379_m_000003_2, Status : FAILED
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
attempt_201311041208_1379_m_000003_2: awk: fatal: can't open source file `/home/pocal/KS/comverse/awk/data_change.awk' for reading (Permission denied)
13/12/12 09:01:37 INFO mapred.JobClient: Job complete: job_201311041208_1379
13/12/12 09:01:37 INFO mapred.JobClient: Counters: 8
13/12/12 09:01:37 INFO mapred.JobClient: Job Counters
13/12/12 09:01:37 INFO mapred.JobClient: Failed map tasks=1
13/12/12 09:01:37 INFO mapred.JobClient: Launched map tasks=52
13/12/12 09:01:37 INFO mapred.JobClient: Data-local map tasks=12
13/12/12 09:01:37 INFO mapred.JobClient: Rack-local map tasks=40
13/12/12 09:01:37 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=348738
13/12/12 09:01:37 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=2952
13/12/12 09:01:37 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/12/12 09:01:37 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/12 09:01:37 ERROR streaming.StreamJob: Job not Successful!
Streaming Command Failed!
Check on one machine:
[pocal@oscbda01 ~]$ ssh oscbda10 ls -l /home/pocal/KS/comverse/awk/data_change.awk
-rwxrwxrwx 1 pocal pocal 1548 Dec 10 10:05 /home/pocal/KS/comverse/awk/data_change.awk
Permissions look OK…
Does anybody have any ideas?

The problem was with the parent directory:
[pocal@oscbda01 .ssh]$ ssh oscbda05 ls -l /home/|grep pocal
total 24
drwx------ 5 pocal pocal 4096 Dec 10 09:52 pocal
[pocal@oscbda01 .ssh]$ ssh oscbda05 ls -l /home/pocal
total 4
drwxrwxrwx 3 pocal pocal 4096 Dec 10 09:52 KS
[pocal@oscbda01 .ssh]$ ssh oscbda05 ls -l /home/pocal/KS
total 4
drwxrwxrwx 3 pocal pocal 4096 Dec 10 09:52 comverse
[pocal@oscbda01 .ssh]$ ssh oscbda05 ls -l /home/pocal/KS/comverse/
total 4
drwxrwxrwx 2 pocal pocal 4096 Dec 10 10:05 awk
[pocal@oscbda01 .ssh]$ ssh oscbda05 ls -l /home/pocal/KS/comverse/awk/
total 216
-rwxrwxrwx 1 pocal pocal 4398 Dec 10 10:05 calltype_checker.awk
-rwxrwxrwx 1 pocal pocal 16173 Dec 10 10:05 ch_rebuild.c
-rwxrwxrwx 1 pocal pocal 14643 Dec 10 10:05 ch_rebuild.dat
-rwxrwxrwx 1 pocal pocal 1548 Dec 10 10:05 data_change.awk
-rwxrwxrwx 1 pocal pocal 4080 Dec 10 10:05 decompress_incomming_data.sh
-rwxrwxrwx 1 pocal pocal 720 Dec 10 10:05 fms.awk
-rwxrwxrwx 1 pocal pocal 2502 Dec 10 10:05 load_func
-rwxrwxrwx 1 pocal pocal 1308 Dec 10 10:05 load_vars
-rwxrwxrwx 1 pocal pocal 199 Dec 10 10:05 load_vars_dynamic
-rwxrwxrwx 1 pocal pocal 1358 Dec 10 10:05 out.awk
-rwxrwxrwx 1 pocal pocal 1296 Dec 10 10:05 out_nosort.awk
-rwxrwxrwx 1 pocal pocal 1358 Dec 10 10:05 out_sort.awk
-rwxrwxrwx 1 pocal pocal 70041 Dec 10 10:05 selfcare.awk
-rwxrwxrwx 1 pocal pocal 54204 Dec 10 10:05 selfcare_secondary_mapping.awk
-rwxrwxrwx 1 pocal pocal 1847 Dec 10 10:05 stat.awk
Add execute permission on /home/pocal on every node:
[pocal@oscbda01 .ssh]$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 ; do ssh -f oscbda$i chmod +x /home/pocal; done;
Check:
[pocal@oscbda01 .ssh]$ ssh oscbda05 ls -l /home/|grep poc
drwx--x--x 5 pocal pocal 4096 Dec 10 09:52 pocal
And it works!
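A quick way to catch this kind of problem in the future is to check every component of the path, not just the target file; a minimal sketch, assuming util-linux's namei is available on the nodes:
for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 ; do
  echo "== oscbda$i =="
  # namei -l prints owner and permission bits for every path component,
  # which would have exposed the drwx------ on /home/pocal right away
  ssh oscbda$i namei -l /home/pocal/KS/comverse/awk/data_change.awk
done
Another option worth considering is Hadoop streaming's generic -files option, which ships the awk scripts with the job into each task's working directory and avoids the per-node scp/chmod dance altogether.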

Related

Filebeat is not collecting logs?

Problem
Ubuntu 18.04. The logs from some files are not being sent. For example, I had 16 log files on 2020-06-23, but only #5 and #8 got collected into data.json; the others are not found in data.json.
Here's a script I use to find files that are on disk but not in data.json.
sudo python -c '
import json, os
# files currently on disk
raw = os.listdir("/path/to/my/logdir")
# files Filebeat has recorded in its registry
f = open("/var/lib/filebeat/registry/filebeat/data.json", "r")
data = json.load(f)
harvested = [d["source"].split("/")[-1] for d in data]
# files on disk that the registry does not know about
missing = [x for x in raw if x not in harvested]
print("\n".join(missing))
'
The script's output (there's a lot of it):
app-2020-06-21.20.log
app-2020-06-21.25.log
app-2020-06-23.11.log
app-2020-06-22.1.log
app-2020-06-22.48.log
app-2020-06-21.41.log
app-2020-06-23.15.log
...
And the Filebeat log contains only two types of messages, End of file reached and Non-zero metrics in the last 30s:
Jun 23 12:23:12 filebeat[32738]: 2020-06-23T12:23:12.223Z DEBUG [harvester] log/log.go:107 End of file reached: /path/to/my/logdir/app-2020-06-21.43.log; Backoff now.
Jun 23 12:23:12 filebeat[32738]: 2020-06-23T12:23:12.344Z DEBUG [harvester] log/log.go:107 End of file reached: /path/to/my/logdir/app-2020-06-22.9.log; Backoff now.
Jun 23 12:23:12 filebeat[32738]: 2020-06-23T12:23:12.364Z DEBUG [harvester] log/log.go:107 End of file reached: /path/to/my/logdir/app-2020-06-21.34.log; Backoff now.
Jun 23 12:23:12 filebeat[32738]: 2020-06-23T12:23:12.444Z DEBUG [harvester] log/log.go:107 End of file reached: /path/to/my/logdir/app-2020-06-23.5.log; Backoff now.
Jun 23 12:23:15 filebeat[32738]: 2020-06-23T12:23:15.144Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":350,"time":{"ms":10}},"total":{"ticks":1710,"ti
Jun 23 12:23:45 filebeat[32738]: 2020-06-23T12:23:45.144Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":350,"time":{"ms":9}},"total":{"ticks":1720,"tim
Jun 23 12:24:15 filebeat[32738]: 2020-06-23T12:24:15.144Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":360,"time":{"ms":6}},"total":{"ticks":1740,"tim
Jun 23 12:24:45 filebeat[32738]: 2020-06-23T12:24:45.144Z
Details
Filebeat config
output:
  logstash:
    enabled: true
    hosts:
      - x.x.x.x:5044
filebeat:
  inputs:
    -
      paths:
        - "/path/to/log/dir/*"
      document_type: myapp
      multiline.pattern: '^[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}'
      multiline.negate: true
      multiline.match: after
      clean_removed: true
      close_removed: true
logging.level: debug
name: "myapp"
tags: ["tag1", "tag2"]
Disk
Here's the disk usage; the disk looks fine:
Filesystem Size Used Avail Use% Mounted on
... 2.0G 0 2.0G 0% /dev
... 395M 820K 394M 1% /run
... 30G 12G 18G 41% /
Here's the inode check. You can see that there are no duplicate inodes among the 06-23 logs.
ls -il * | grep 06-23
768289 -rw-r--r-- 1 root root 10485996 Jun 23 00:33 app-2020-06-23.0.log
768372 -rw-r--r-- 1 root root 10486447 Jun 23 01:02 app-2020-06-23.1.log
768292 -rw-r--r-- 1 root root 10485819 Jun 23 05:36 app-2020-06-23.10.log
800654 -rw-r--r-- 1 root root 10499153 Jun 23 05:59 app-2020-06-23.11.log
794052 -rw-r--r-- 1 root root 10486575 Jun 23 06:32 app-2020-06-23.12.log
768487 -rw-r--r-- 1 root root 10492683 Jun 23 06:59 app-2020-06-23.13.log
800633 -rw-r--r-- 1 root root 10490445 Jun 23 07:27 app-2020-06-23.14.log
794067 -rw-r--r-- 1 root root 10500849 Jun 23 07:55 app-2020-06-23.15.log
788191 -rw-r--r-- 1 root root 10489159 Jun 23 08:28 app-2020-06-23.16.log
788410 -rw-r--r-- 1 root root 10486744 Jun 23 09:30 app-2020-06-23.17.log
800624 -rw-r--r-- 1 root root 10486794 Jun 23 10:00 app-2020-06-23.18.log
794048 -rw-r--r-- 1 root root 10490002 Jun 23 10:39 app-2020-06-23.19.log
768461 -rw-r--r-- 1 root root 10486161 Jun 23 01:36 app-2020-06-23.2.log
794051 -rw-r--r-- 1 root root 10488204 Jun 23 11:12 app-2020-06-23.20.log
794081 -rw-r--r-- 1 root root 10487146 Jun 23 11:46 app-2020-06-23.21.log
794071 -rw-r--r-- 1 root root 10492866 Jun 23 12:16 app-2020-06-23.22.log
787673 -rw-r--r-- 1 root root 10490849 Jun 23 12:51 app-2020-06-23.23.log
787698 -rw-r--r-- 1 root root 3491076 Jun 23 13:00 app-2020-06-23.24.log
768478 -rw-r--r-- 1 root root 10486306 Jun 23 02:08 app-2020-06-23.3.log
768507 -rw-r--r-- 1 root root 10486690 Jun 23 02:34 app-2020-06-23.4.log
800620 -rw-r--r-- 1 root root 10496353 Jun 23 03:00 app-2020-06-23.5.log
800623 -rw-r--r-- 1 root root 10503668 Jun 23 03:36 app-2020-06-23.6.log
768521 -rw-r--r-- 1 root root 10520722 Jun 23 04:05 app-2020-06-23.7.log
774652 -rw-r--r-- 1 root root 10487379 Jun 23 04:38 app-2020-06-23.8.log
784704 -rw-r--r-- 1 root root 10553972 Jun 23 05:05 app-2020-06-23.9.log
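Since Filebeat tracks files by inode, another thing worth checking is whether the registry already holds those inodes under a different (rotated or deleted) path, which would make the new files look as if they had already been harvested. A rough sketch, assuming the registry entries carry a "FileStateOS":{"inode":...} field as in recent Filebeat versions, and with /path/to/my/logdir standing in for the real directory:
REG=/var/lib/filebeat/registry/filebeat/data.json
DIR=/path/to/my/logdir
# every inode Filebeat has registered (relies on the compact JSON layout of data.json)
grep -o '"inode":[0-9]*' "$REG" | cut -d: -f2 | sort -u > /tmp/registry_inodes
for f in "$DIR"/*.log; do
  inode=$(stat -c %i "$f")
  # file is not registered under its own path, but its inode is already known
  if ! grep -q "\"source\":\"$f\"" "$REG" && grep -qx "$inode" /tmp/registry_inodes; then
    echo "possible inode reuse: $f (inode $inode)"
  fi
done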

Class org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem not found when using -addMount in HDFS

I have the following setup:
$ export |grep HADOOP
declare -x HADOOP_HOME="/home/jesaremi/hadoop-3.1.3-bin"
declare -x HADOOP_OPTIONAL_TOOLS="hadoop-azure"
$ echo $PATH
/home/jesaremi/hadoop-3.1.3-bin/bin:/home/jesaremi/hadoop-3.1.3-bin/sbin:/opt/protobuf/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/jesaremi/.dotnet/tools:/opt/HP_Fortify/HP_Fortify_SCA_and_Apps_4.30/bin:/opt/HP_Fortify/HP_Fortify_SCA_and_Apps_4.30/bin
$ ls -l /home/jesaremi/hadoop-3.1.3-bin/share/hadoop/common/
total 7452
-rw-rw-r-- 1 jesaremi jesaremi 303043 Dec 19 17:30 hadoop-azure-3.1.3.jar
-rw-rw-r-- 1 jesaremi jesaremi 4092533 Dec 17 09:42 hadoop-common-3.1.3-SNAPSHOT.jar
-rw-rw-r-- 1 jesaremi jesaremi 2877294 Dec 17 09:42 hadoop-common-3.1.3-SNAPSHOT-tests.jar
-rw-rw-r-- 1 jesaremi jesaremi 130001 Dec 17 09:42 hadoop-kms-3.1.3-SNAPSHOT.jar
-rw-rw-r-- 1 jesaremi jesaremi 201637 Dec 17 09:42 hadoop-nfs-3.1.3-SNAPSHOT.jar
drwxrwxr-x 2 jesaremi jesaremi 4096 Dec 17 09:42 jdiff
drwxrwxr-x 2 jesaremi jesaremi 4096 Dec 17 09:42 lib
drwxrwxr-x 2 jesaremi jesaremi 4096 Dec 17 09:42 sources
drwxrwxr-x 3 jesaremi jesaremi 4096 Dec 17 09:42 webapps
Yet when I try to mount an ABFS file system I get this:
$ hdfs dfsadmin -addMount abfs://main@test1.dfs.core.windows.net/ /mnt/abfs/main "fs.azure.abfs.account.name=test1.dfs.core.windows.net,fs.azure.account.key.test1.dfs.core.windows.net=<key removed>"
addMount: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem not found
I'm afraid the HADOOP_OPTIONAL_TOOLS env var isn't enough; you'll need to get the hadoop-azure JAR and some others into common/lib.
From share/hadoop/tools/lib, copy the hadoop-azure JAR, the azure-* JARs and, if it's there, wildfly-openssl.jar into share/hadoop/common/lib.
The cloudstore JAR is useful for diagnostics, as it tells you which JAR is missing, e.g.
bin/hadoop jar cloudstore-0.1-SNAPSHOT.jar storediag abfs://c@account.dfs.core.windows.net/
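A sketch of that copy step, using the install path from the question (adjust the file names to whatever versions actually sit in your tools/lib):
cd /home/jesaremi/hadoop-3.1.3-bin
cp share/hadoop/tools/lib/hadoop-azure-*.jar share/hadoop/common/lib/
cp share/hadoop/tools/lib/azure-*.jar share/hadoop/common/lib/
# only present in some builds
cp share/hadoop/tools/lib/wildfly-openssl*.jar share/hadoop/common/lib/ 2>/dev/null || true
Then re-run the storediag command above to confirm the classpath is complete.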

rsync: failed to set times on "/cygdrive/e/.": Invalid argument (22)

I get the below error message when I try to rsync from a local hard disk to a USB disk mounted at E: on Windows 10.
rsync: failed to set times on "/cygdrive/e/.": Invalid argument (22)
My rsync command is as below (path shortened for brevity):
rsync -rtv --delete --progress --modify-window=5 /cygdrive/d/path/to/folder/ /cygdrive/e/
I actually need to set modification times (on directories as well), and rsync sets modification times perfectly. It only fails to set times on the root of the USB disk.
I experienced exactly the same problem.
1. I created a dir containing one text file and, when trying to rsync it to a removable (USB) drive, I got the error. However, the file was copied to the destination. The problem is not reproducible if the destination is a folder (other than the root) on the removable drive.
2. I then repeated the process using a fixed drive as the destination, and the problem was not reproducible.
The first difference that popped up between the two drives was the file system (for more details, check [MS.Docs]: File Systems Technologies):
FAT32 - on the removable drive
NTFS - on the fixed one
So this was the cause of my failure. Formatting the USB drive as NTFS fixed the problem:
The USB drive formatted as FAT32 (default):
cfati@cfati-e5550-0 /cygdrive/e/Work/Dev/StackOverflow/q045006385
$ ll /cygdrive/
total 20
dr-xr-xr-x 1 cfati None 0 Jul 14 17:58 .
drwxrwx---+ 1 cfati None 0 Jun 9 15:04 ..
d---r-x---+ 1 NT SERVICE+TrustedInstaller NT SERVICE+TrustedInstaller 0 Jul 13 22:21 c
drwxrwx---+ 1 SYSTEM SYSTEM 0 Jul 14 13:19 e
drwxr-xr-x 1 cfati None 0 Dec 31 1979 n
drwxr-xr-x 1 cfati None 0 Dec 31 1979 w
cfati@cfati-e5550-0 /cygdrive/e/Work/Dev/StackOverflow/q045006385
$ rsync -rtv --progress --modify-window=5 ./dir/ /cygdrive/w
sending incremental file list
rsync: failed to set times on "/cygdrive/w/.": Invalid argument (22)
./
a.txt
3 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=0/2)
sent 111 bytes received 111 bytes 444.00 bytes/sec
total size is 3 speedup is 0.01
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1196) [sender=3.1.2]
cfati@cfati-e5550-0 /cygdrive/e/Work/Dev/StackOverflow/q045006385
$ ll /cygdrive/
total 20
dr-xr-xr-x 1 cfati None 0 Jul 14 17:58 .
drwxrwx---+ 1 cfati None 0 Jun 9 15:04 ..
d---r-x---+ 1 NT SERVICE+TrustedInstaller NT SERVICE+TrustedInstaller 0 Jul 13 22:21 c
drwxrwx---+ 1 SYSTEM SYSTEM 0 Jul 14 13:19 e
drwxr-xr-x 1 cfati None 0 Dec 31 1979 n
drwxr-xr-x 1 cfati None 0 Dec 31 1979 w
After formatting the USB drive as NTFS:
cfati@cfati-e5550-0 /cygdrive/e/Work/Dev/StackOverflow/q045006385
$ ll /cygdrive/
total 24
dr-xr-xr-x 1 cfati None 0 Jul 14 17:59 .
drwxrwx---+ 1 cfati None 0 Jun 9 15:04 ..
d---r-x---+ 1 NT SERVICE+TrustedInstaller NT SERVICE+TrustedInstaller 0 Jul 13 22:21 c
drwxrwx---+ 1 SYSTEM SYSTEM 0 Jul 14 13:19 e
drwxr-xr-x 1 cfati None 0 Dec 31 1979 n
drwxrwxrwx+ 1 Administrators Administrators 0 Jul 14 17:59 w
cfati@cfati-e5550-0 /cygdrive/e/Work/Dev/StackOverflow/q045006385
$ rsync -rtv --progress --modify-window=5 ./dir/ /cygdrive/w
sending incremental file list
./
a.txt
3 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=0/2)
sent 111 bytes received 38 bytes 298.00 bytes/sec
total size is 3 speedup is 0.02
cfati@cfati-e5550-0 /cygdrive/e/Work/Dev/StackOverflow/q045006385
$ ll /cygdrive/
total 24
dr-xr-xr-x 1 cfati None 0 Jul 14 17:59 .
drwxrwx---+ 1 cfati None 0 Jun 9 15:04 ..
d---r-x---+ 1 NT SERVICE+TrustedInstaller NT SERVICE+TrustedInstaller 0 Jul 13 22:21 c
drwxrwx---+ 1 SYSTEM SYSTEM 0 Jul 14 13:19 e
drwxr-xr-x 1 cfati None 0 Dec 31 1979 n
drwxrwxrwx+ 1 Administrators Administrators 0 Jul 14 13:19 w
As a side note, when I was at step 2, I was an idiot and kept the --delete arg, so until I hit Ctrl + C it deleted some data. Luckily, it didn't get to delete any crucial files / folders.
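If you want to confirm which filesystem a drive uses before reformatting, Cygwin's mount command shows it for each /cygdrive entry (the exact output varies between Cygwin versions); and, as point 1 above shows, syncing into a subfolder of the drive instead of its root also avoids the error without reformatting:
mount | grep /cygdrive
# example output - a vfat line means the drive is FAT32-formatted:
#   C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
#   E: on /cygdrive/e type vfat (binary,posix=0,user,noumount,auto)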

Sorting ls output by file size

I am currently sorting the output of ls -l by byte count using:
ls -l | sort -r -k5,5 -n
What if I wanted to make this work with the -@ flag? Currently this will output:
-rwxr-xr-x@ 1 name staff 7106 2 May 10:43 c
-rwxr-xr-x 1 name staff 675 22 Apr 17:57 a
-rwxr-xr-x 1 name staff 486 23 Apr 07:56 b
drwxr-xr-x 4 name staff 136 25 Apr 18:38 d
-rwxr-xr-x 1 name staff 120 23 Apr 07:59 e
-rwxr-xr-x 1 name staff 112 22 Apr 18:45 g
-rwxr-xr-x 1 name staff 51 22 Apr 18:45 f
total 56
com.apple.metadata:_kMDItemUserTags 42
Where I want it to take the extended attribute keys line and keep it below the appropriate file like so:
-rwxr-xr-x@ 1 name staff 7106 2 May 10:43 c
com.apple.metadata:_kMDItemUserTags 42
-rwxr-xr-x 1 name staff 675 22 Apr 17:57 a
-rwxr-xr-x 1 name staff 486 23 Apr 07:56 b
drwxr-xr-x 4 name staff 136 25 Apr 18:38 d
-rwxr-xr-x 1 name staff 120 23 Apr 07:59 e
-rwxr-xr-x 1 name staff 112 22 Apr 18:45 g
-rwxr-xr-x 1 name staff 51 22 Apr 18:45 f
total 56
No need to use sort, just use the -S option with ls:
ls -Sl
(that's an uppercase S)
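If the goal is size-sorted output that keeps each extended-attribute line attached to its file, the two flags can be combined; a minimal sketch, assuming the BSD/macOS ls where -@ is available:
# -S sorts by size, -l gives long output, -@ prints extended attribute keys under each entry
ls -lS@
Because ls sorts the entries before printing, the com.apple.metadata:_kMDItemUserTags line stays directly below file c instead of being shuffled away by an external sort.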

How to include a library in the path while compiling?

I'm reading this post about Go and was trying to compile the source code found here.
I downloaded the source code, compiled the first file with make, and I can see the object is generated:
$pwd
/Users/oscarryz/code/go/rsc/rosetta/graph
$ls -ltR
total 136
-rw-r--r-- 1 oscarryz staff 61295 Sep 17 16:20 _go_.6
drwxr-xr-x 3 oscarryz staff 102 Sep 17 16:20 _obj
-rw-r--r-- 1 oscarryz staff 126 Sep 17 16:17 Makefile
-rw-r--r-- 1 oscarryz staff 2791 Sep 17 16:17 graph.go
./_obj:
total 0
drwxr-xr-x 3 oscarryz staff 102 Sep 17 16:20 rsc.googlecode.com
./_obj/rsc.googlecode.com:
total 0
drwxr-xr-x 3 oscarryz staff 102 Sep 17 16:20 hg
./_obj/rsc.googlecode.com/hg:
total 0
drwxr-xr-x 3 oscarryz staff 102 Sep 17 16:20 rosetta
./_obj/rsc.googlecode.com/hg/rosetta:
total 136
-rw-r--r-- 1 oscarryz staff 68486 Sep 17 16:20 graph.a
Now my question is, how do I refer to that compiled code from the maze directory:
/Users/oscarryz/code/go/rsc/rosetta/maze/maze.go
Whose import declarations are:
import (
    "bytes"
    "fmt"
    "rand"
    "time"
    "rsc.googlecode.com/hg/rosetta/graph"
)
And right now it fails to compile with this error message:
6g -o _go_.6 maze.go
maze.go:20: can't find import: rsc.googlecode.com/hg/rosetta/graph
make: *** [_go_.6] Error 1
OK, I found it; it wasn't that hard. From the 6g flags:
-I DIR    search for packages in DIR
So I have to specify the -I option like this:
6g -I ../graph/_obj/ -o _go_.6 maze.go
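For clarity, -I points at the directory that contains the package file matching the import path, so the layout built above can be checked directly (same paths as in the question):
# the import "rsc.googlecode.com/hg/rosetta/graph" resolves to this archive under the -I directory
ls -l ../graph/_obj/rsc.googlecode.com/hg/rosetta/graph.a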
