Graphviz cannot open image, Ubuntu

I am having a problem including an image in a node in Graphviz. I tried including the image in the node both via image="name.svg" and via <IMG SRC="name.svg" /> in an HTML-like label. I also tried different image formats (svg, jpg, png), but when exporting I always get the message:
Warning: No such file or directory while opening name.svg
Warning: No or improper image="name.svg" for node "somenode"
The .dot file looks like:
digraph {
    node1 [image="image.svg", label=""]
    node2 [label=<<IMG SRC="image.svg"/>>]
    node1 -> node2
}
image.svg is located in the same directory as the .dot file (output of ls -la):
total 96
drwxrwxr-x 2 filip filip 4096 maj 8 07:55 .
drwxr-xr-x 34 filip filip 4096 maj 8 07:56 ..
-rw-rw-r-- 1 filip filip 85030 maj 7 17:05 image.svg
-rw-rw-r-- 1 filip filip 117 maj 8 07:54 mygraph.dot
My OS is Ubuntu 20.04.3 LTS.
I have already looked at many other questions regarding this problem, but none of the solutions given seem to help.
Any suggestions?

When retrying the whole process in different ways I found out how it should be done, although I don't have enough knowledge to understand why this happens.
The thing is, the command dot -Tsvg mygraph.dot > mygraph.svg must be run from the directory where mygraph.dot is stored. Also, image.svg must be located in this same directory or in one of its subdirectories. If, for example, mygraph.dot is stored inside dir1, running
dot -Tsvg dir1/mygraph.dot > dir1/mygraph.svg
from the parent directory of dir1 won't work.
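If you are driving Graphviz from a script, a minimal sketch (the path below is only an example, not from the original question) is to run dot with its working directory set to the folder that holds both mygraph.dot and image.svg, which has the same effect as cd-ing there first:
import subprocess
from pathlib import Path

# Hypothetical location of the .dot file; image.svg is assumed to sit next to it.
dot_file = Path("/home/filip/dir1/mygraph.dot")

# Run dot from the directory containing both files, so the relative
# image="image.svg" reference can be resolved.
subprocess.run(
    ["dot", "-Tsvg", dot_file.name, "-o", "mygraph.svg"],
    cwd=dot_file.parent,
    check=True,
)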
Also, an image inside an HTML-like label should be placed inside a table:
node [label=
    <<TABLE><TR><TD>
        <IMG SRC="/home/filip/dir2/images/image.svg"/>
    </TD></TR></TABLE>>
]
It is possible all this is stated somewhere in the documentation and I just didn't notice.
It may also be worth mentioning that on Ubuntu, if I wanted image.svg embedded in the output file, I had to additionally install the librsvg2-dev and libgraphviz-dev packages, then run:
dot -Tsvg:cairo mygraph.dot > mygraph.svg

How to insert the configuration.nix file inside my dot files?

I am creating my dotfiles following this tutorial. It works successfully for emacs.d.
Since I am using NixOS, I tried doing exactly the same steps, creating a symlink for the configuration.nix file. Thus, I did:
1 - On the terminal:
[pedro#system:/etc/nixos]$ sudo mv /etc/nixos/configuration.nix ~/.dotfiles/
2 - Then:
[pedro#system:/etc/nixos]$ ln -sf ~/.dotfiles/configuration.nix configuration.nix~
3 - It seems to work fine, as when I do:
[pedro#system:/etc/nixos]$ ls -la
total 12
drwxr-xr-x 2 root root 4096 Dec 1 21:41 .
drwxr-xr-x 32 root root 4096 Dec 1 22:00 ..
lrwxrwxrwx 1 root root 39 Dec 1 21:41 configuration.nix~ -> /home/pedro/.dotfiles/configuration.nix
-rw-r--r-- 1 root root 842 Nov 12 17:40 hardware-configuration.nix
After making some edits and saving the changes, I can't do nixos-rebuild switch, though. It throws an error:
[pedro#system:/etc/nixos]$ sudo nixos-rebuild switch
warning: Nix search path entry '/etc/nixos/configuration.nix' does not exist, ignoring
error: file 'nixos-config' was not found in the Nix search path (add it using $NIX_PATH or -I), at /nix/var/nix/profiles/per-user/root/channels/nixos/nixos/default.nix:1:60
(use '--show-trace' to show detailed location information)
building Nix...
warning: Nix search path entry '/etc/nixos/configuration.nix' does not exist, ignoring
error: file 'nixos-config' was not found in the Nix search path (add it using $NIX_PATH or -I), at /nix/var/nix/profiles/per-user/root/channels/nixos/nixos/default.nix:1:60
(use '--show-trace' to show detailed location information)
building the system configuration...
warning: Nix search path entry '/etc/nixos/configuration.nix' does not exist, ignoring
error: file 'nixos-config' was not found in the Nix search path (add it using $NIX_PATH or -I), at /nix/var/nix/profiles/per-user/root/channels/nixos/nixos/default.nix:1:60
(use '--show-trace' to show detailed location information)
The trailing ~ in configuration.nix~ might be the problem here. How can I fix this?
Thanks!
Your step 2 seems to have caused the issue here: the symlink should be called configuration.nix, not configuration.nix~, as you have noticed.
You could fix this by running mv configuration.nix~ configuration.nix in the /etc/nixos folder, which would rename configuration.nix~ to the correct configuration.nix.

Where does hugging face's transformers save models?

Running the code below downloads a model - does anyone know which folder it downloads it to?
!pip install -q transformers
from transformers import pipeline
model = pipeline('fill-mask')
Update 2021-03-11: The cache location has now changed and is located in ~/.cache/huggingface/transformers, as also detailed in the answer by @victorx.
This post should shed some light on it (plus some investigation of my own, since it is already a bit older).
As mentioned, the default location on a Linux system is ~/.cache/torch/transformers/ (I'm currently using transformers v2.7, but it is unlikely to change anytime soon). The cryptic folder names in this directory seemingly correspond to the Amazon S3 hashes.
Also note that the pipeline tasks are just a "rerouting" to other models. To know which one you are currently loading, see here. For your specific model, pipeline('fill-mask') actually uses a distilroberta-base model.
As of Transformers version 4.3, the cache location has been changed.
The exact place is defined in this code section: https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L181-L187
On Linux, it is at ~/.cache/huggingface/transformers.
The file names there are basically SHA hashes of the original URLs from which the files are downloaded. The corresponding json files can help you figure out what the original file names are.
On Windows 10, replace ~ with C:\Users\username, or in cmd run cd /d "%HOMEDRIVE%%HOMEPATH%".
So the full path will be: C:\Users\username\.cache\huggingface\transformers
As of transformers 4.22, the path appears to be (tested on CentOS):
~/.cache/huggingface/hub/
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="sentence-transformers/all-MiniLM-L6-v2", filename="config.json")
ls -lrth ~/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/snapshots/7dbbc90392e2f80f3d3c277d6e90027e55de9125/
total 4.0K
lrwxrwxrwx 1 alex alex 52 Jan 25 12:15 config.json -> ../../blobs/72b987fd805cfa2b58c4c8c952b274a11bfd5a00
lrwxrwxrwx 1 alex alex 76 Jan 25 12:15 pytorch_model.bin -> ../../blobs/c3a85f238711653950f6a79ece63eb0ea93d76f6a6284be04019c53733baf256
lrwxrwxrwx 1 alex alex 52 Jan 25 12:30 vocab.txt -> ../../blobs/fb140275c155a9c7c5a3b3e0e77a9e839594a938
lrwxrwxrwx 1 alex alex 52 Jan 25 12:30 special_tokens_map.json -> ../../blobs/e7b0375001f109a6b8873d756ad4f7bbb15fbaa5
lrwxrwxrwx 1 alex alex 52 Jan 25 12:30 tokenizer_config.json -> ../../blobs/c79f2b6a0cea6f4b564fed1938984bace9d30ff0
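If you want the downloads to go somewhere other than the default cache, a minimal sketch (the directory below is an assumption, not from the answers above) is to point the cache environment variable at another location before importing transformers:
import os

# Hypothetical directory, not from the answers above. Depending on your
# transformers version, HF_HOME (newer) or TRANSFORMERS_CACHE (older) controls
# where downloads are cached; either must be set before transformers is imported.
os.environ["HF_HOME"] = "/data/hf-cache"

from transformers import pipeline

# With the variable above, downloaded files land under /data/hf-cache
# (in a hub/ or transformers/ subfolder, depending on the library version).
unmasker = pipeline("fill-mask")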

Why does Mathematica WolframScript get file fail?

I use Mathematica 11 and created a project containing two files: a package file named MyPackage.m and another named run.m. The package file contains just ordinary functions, not the special Mathematica package structure (https://reference.wolfram.com/workbench/index.jsp?topic=/com.wolfram.eclipse.help/html/tasks/applications/packages.html), and the other contains code to get MyPackage.m and use the functions.
(* Package.m *)
myFun[x_String] := Print[x]
...
(* run.m *)
<<"Package.m"
myFun["Hello,World"]
I put these two files into one directory and made sure that $Path contains the directory path. But when I run wolframscript -file ./run.m -print all, it returns $Failed.
The question is: how do I import another file when using wolframscript? It seems it cannot find the target file even though they are in the same directory.
I use Mathematica 11 and run wolframscript on an Ubuntu server where I have installed the latest free CDF Player.
I encountered no problem running your script. Also, -print all appears to be superfluous.
C:\Users\chrisd\Documents\test>dir
Volume in drive C is Windows7_OS
Volume Serial Number is 102A-B66B
Directory of C:\Users\chrisd\Documents\test
14/09/2017 15:03 <DIR> .
14/09/2017 15:03 <DIR> ..
14/09/2017 14:59 29 Package.m
14/09/2017 14:59 38 run.m
2 File(s) 67 bytes
2 Dir(s) 215,590,776,832 bytes free
C:\Users\chrisd\Documents\test>wolframscript -file run.m -print all
Hello,World
C:\Users\chrisd\Documents\test>type Package.m
myFun[x_String] := Print[x]
C:\Users\chrisd\Documents\test>type run.m
<<"Package.m"
myFun["Hello,World"];
C:\Users\chrisd\Documents\test>

Use newsyslog to rotate log files, but only if they have a certain size

I'm on OS X 10.9.4 and trying to use newsyslog to rotate my app development log files.
More specifically, I want to rotate the files daily but only if they are not empty (newsyslog writes one or two lines to every logfile it rotates, so let's say I only want to rotate logs that are at least 1kb).
I created a file /etc/newsyslog.d/code.conf:
# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/Users/manuel/code/**/log/*.log manuel:staff 644 7 1 $D0 GN
The way I understand the man page for the configuration file is that size and when conditions should work in combination, so logfiles should be rotated every night at midnight only if they are 1kb or larger.
Unfortunately this is not what happens. The log files are rotated every night, no matter whether they contain only the rotation message from newsyslog or anything else:
~/code/myapp/log (master) $ ls
total 32
drwxr-xr-x 6 manuel staff 204B Aug 8 00:17 .
drwxr-xr-x 22 manuel staff 748B Jul 25 14:56 ..
-rw-r--r-- 1 manuel staff 64B Aug 8 00:17 development.log
-rw-r--r-- 1 manuel staff 153B Aug 8 00:17 development.log.0
~/code/myapp/log (master) $ cat development.log
Aug 8 00:17:41 localhost newsyslog[81858]: logfile turned over
~/code/myapp/log (master) $ cat development.log.0
Aug 7 00:45:17 Manuels-MacBook-Pro newsyslog[34434]: logfile turned over due to size>1K
Aug 8 00:17:41 localhost newsyslog[81858]: logfile turned over
Any tips on how to get this working would be appreciated!
What you're looking for (rotate files daily unless they haven't logged anything) isn't possible using newsyslog. The man page you referenced doesn't say anything about size and when being combined, other than to say that if when isn't specified, then it is as if only size were specified. The reality is that the log is rotated when either condition is met. If the utility is like its FreeBSD counterpart, it won't rotate logs smaller than 512 bytes unless the binary flag is set.
macOS's newer replacement for newsyslog, ASL, also doesn't have the behavior you desire. As far as I know, the only utility that has this is logrotate, using its notifempty configuration option. You can install logrotate on your Mac using Homebrew.
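For reference, a hypothetical logrotate stanza (the path pattern and options are examples; note that the wildcard matches only one directory level, unlike newsyslog's ** above) that rotates daily but skips empty files via notifempty:
/Users/manuel/code/*/log/*.log {
    daily
    rotate 7
    notifempty
    missingok
}
Unlike newsyslog, logrotate installed via Homebrew is not triggered by the system automatically; you would schedule it yourself, for example via brew services, cron, or a launchd job.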

Hadoop Log File Analysis from 2 separate machines

I am new to Hadoop. I have to find the trend of symbols traded among users.
I have 2 machines, b040n10 and b040n11. The files on the machines are as listed below:
b040n10:/u/ssekar>ls -lrt
-rw-r--r-- 1 root root 482342353 Feb 8 2014 A.log
-rw-r--r-- 1 root root 481231231 Feb 8 2014 B.log
b040n11:/u/ssekar>ls -lrt
-rw-r--r-- 1 root root 412312312 Feb 8 2014 C.log
-rw-r--r-- 1 root root 412356315 Feb 8 2014 D.log
There is a field called "symbol_name" in all these logs (example below).
IP=145.45.34.2;symbol_name=ABC;timestamp=12:13:05
IP=145.45.34.2;symbol_name=XYZ;timestamp=12:13:56
IP=145.45.34.2;symbol_name=ABC;timestamp=12:14:56
I have Hadoop running on my Laptop and I have 2 machines connected to my Laptop (can be used as Datanodes).
My task now is to get the list of symbol_names and the count for each symbol, as shown below:
ABC - 2
XYZ - 1
Should I now:
1. copy all the files (A.log, B.log, C.log, D.log) from b040n10 and b040n11 to my laptop,
2. issue a copyFromLocal command to the HDFS system and analyze the data?
Or is there a better way to find out the symbol_name counts without copying these files to my laptop?
The question is a basic one, but I am new to Hadoop; please help me understand and use Hadoop better. Please let me know if more information on the question is needed.
Thanks
Copying the files from Hadoop to your local laptop defeats the entire purpose of Hadoop, which is to move the processing to the data, not the other way around. When you really have "big data", you won't be able to move the data around to process it locally.
Your problem is a typical Map/Reduce case: all you need is a job that counts the occurrences of each symbol. Just search for a Map/Reduce WordCount example and adapt it to your case.
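As a concrete illustration (not code from this thread), the WordCount idea adapts to these logs as a Hadoop Streaming job with two small Python scripts; the parsing assumes the semicolon-separated format shown in the question:
# mapper.py -- emits "symbol<TAB>1" for every symbol_name field found on stdin
import sys

for line in sys.stdin:
    for field in line.strip().split(";"):
        if field.startswith("symbol_name="):
            print(field.split("=", 1)[1] + "\t1")

# reducer.py -- sums the counts produced by mapper.py
import sys
from collections import defaultdict

counts = defaultdict(int)
for line in sys.stdin:
    symbol, value = line.rstrip("\n").split("\t")
    counts[symbol] += int(value)

for symbol, total in counts.items():
    print(symbol + "\t" + str(total))
You would copy A.log through D.log into HDFS with hdfs dfs -copyFromLocal (or -put) and submit the two scripts as mapper and reducer with the Hadoop Streaming jar that ships with your distribution; the cluster then does the counting where the data lives.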
