I installed OpenFOAM a couple of days ago. I was able to follow the instructions given here without facing any major issues. However, when I run the command mkdir -p $FOAM_RUN today, I get this error: mkdir: missing operand. I also tried mkdir -p "$FOAM_RUN" (as suggested here). Even this failed, with the error message mkdir: cannot create directory '': No such file or directory. My ~/.bashrc file is updated as per the instructions given on the OpenFOAM website.
It seems that your variable $FOAM_RUN is not defined. You can confirm that by executing the command: echo $FOAM_RUN
Like #unxnut said, your variable is not defined.
To define the variable, you just type:
FOAM_RUN=dir_name
Then your mkdir command will create a directory called "dir_name", or whatever value you assign to FOAM_RUN.
Hope this helps.
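Putting the two answers together, a minimal sketch of the check-and-create sequence (the fallback directory name here is a hypothetical choice; OpenFOAM normally defines FOAM_RUN for you when you source its own bashrc):

```shell
# If FOAM_RUN is empty or unset, fall back to a hypothetical default;
# normally OpenFOAM's bashrc defines it when sourced.
if [ -z "$FOAM_RUN" ]; then
    FOAM_RUN="$HOME/OpenFOAM/run"
fi

# Quote the variable so an empty value fails loudly instead of silently.
mkdir -p "$FOAM_RUN"
echo "FOAM_RUN is $FOAM_RUN"
```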
I have two issues I need help with, involving bash, Linux, and s3cmd.
First, I'm running into a Linux permission issue. I am trying to download zip files from an S3 bucket using s3cmd, with the following command in a bash script (script.sh):
/usr/bin/s3cmd get s3://<bucketname>/<folder>/zipfilename.tar.gz
I am seeing the following error: permission denied.
If I try to run this command manually from command line on a linux machine, it works and downloads the file:
sudo /usr/bin/s3cmd get s3://<bucketname>/<folder>/zipfilename.tar.gz
I really don't want to use sudo in front of the command in the script. How do I get this command to work? Do I need to change ownership (chown) of script.sh, which sits at a path like /foldername/script.sh, or how else do I get this get command to work?
Second: once I get this command to work, how do I get it to download from S3 to my Linux home directory (~/)? Do I have to put cd ~/ in the bash script before the download command?
I really appreciate any help and guidance.
First, determine what is failing and why; otherwise you won't find the answer.
You can specify the destination explicitly to avoid permission problems when the script is invoked from a directory that's not writable by that process:
/usr/bin/s3cmd get s3://<bucketname>/<folder>/zipfilename.tar.gz /path/to/writable/destination/zipfilename.tar.gz
First of all, ask one question at a time.
For the first one, you can simply change the ownership with chown, like:
chown usertorunscript filename
For the second:
If it is the user's home directory, you can just specify it with
~user
as you said, but I think writing the full path is safer, so it will work for more users (if you need it to).
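Combining the two answers, a rough sketch of the download step: check that the working directory is writable, fall back to the home directory if not, and pass the destination to s3cmd explicitly instead of cd-ing first. The s3cmd line is left commented out with placeholder bucket/folder/file names.

```shell
# Pick a writable destination: the current directory if possible,
# otherwise the invoking user's home directory.
if [ -w "$PWD" ]; then
    DEST="$PWD"
else
    DEST="$HOME"
fi
echo "Downloading to $DEST"

# Placeholder bucket/folder/file names -- substitute your own.
# /usr/bin/s3cmd get "s3://<bucketname>/<folder>/zipfilename.tar.gz" "$DEST/zipfilename.tar.gz"
```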
I am a rookie in Python who has been working through Learn Python the Hard Way. The whole process went well, as I have a smattering of Python knowledge, until I marched into ex46, where I got stuck in the 'Creating the Skeleton Project Directory' section. I have no idea where I should run the commands given in the book. Following is an excerpt of that part:
First, create the structure of your skeleton directory with these commands:
$ mkdir projects
$ cd projects/
$ mkdir skeleton
$ cd skeleton
$ mkdir bin
$ mkdir NAME
$ mkdir tests
$ mkdir docs
I have tried to run these commands in Windows PowerShell, only to be warned that the commands can't be recognized. I also fumbled trying to execute them in PyCharm, but all in vain. Could someone point out how I can get this done?
In addition, I am somewhat curious whether there is a handier way to approach this in PyCharm. Could I achieve the same goal there?
I am using Python 2.7, and all previous exercises worked well until ex46.
You get that error because you're typing a superfluous $ at the beginning of each command. That $ is the (Linux) command prompt. On your Windows machine, it's something like C:\WINDOWS\system32>. Don't type that.
Just type
mkdir projects
and press Enter. That creates a folder (directory) named "projects". Then type
cd projects
and press Enter. That changes the current directory to that new folder you just created. And so on.
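The whole sequence from the book, with the prompt characters stripped, is then just the following (these mkdir and cd commands behave the same in PowerShell, cmd.exe, and a Unix shell):

```shell
# Same commands as in the book, minus the "$" prompt.
mkdir projects
cd projects
mkdir skeleton
cd skeleton
mkdir bin
mkdir NAME
mkdir tests
mkdir docs
```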
Content migrated from comments, since this is what actually solved the issue.
Remove the dollar sign $ from the start of each command; it is just the symbol used as the CLI prompt, not part of the command itself.
Then type the mkdir command and it should work e.g.
mkdir my_directory
I'm just setting up Kaldi for the first time and going through the tidigits example. However, when I run run.sh, I get:
steps/make_mfcc.sh --cmd run.pl --mem 2G --nj 20 data/test exp/make_mfcc/test mfcc
utils/validate_data_dir.sh: Successfully validated data-directory data/test
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
run.pl: 20 / 20 failed, log is in exp/make_mfcc/test/make_mfcc_test.*.log
Looking at the log files, I see the issue is:
bash: line 1: compute-mfcc-feats: command not found
bash: line 1: copy-feats: command not found
This seems to be a PATH issue, and other forums online seem to confirm this. However, I'm not sure how to resolve it. I've traced the compute-mfcc-feats and copy-feats commands to calls in make_mfcc.sh in the steps folder (supposedly a symlink to the wsj example). Please help!
The path to the executables is configured via the KALDI_ROOT variable in the recipe's path.sh script; for the tidigits recipe it is kaldi/egs/tidigits/s5/path.sh. The path specified is relative, so you must run the commands from the kaldi/egs/tidigits/s5 folder and not from another folder. There could be the following problems:
You didn't compile Kaldi, and the binaries do not exist in kaldi/src/featbin.
You moved the training folder out of the Kaldi tree and didn't update the KALDI_ROOT variable in path.sh.
You ran run.sh from some other folder, not from the kaldi/egs/tidigits/s5 folder.
Usually you simply need to check the contents of path.sh and specify the proper Kaldi root there.
I tried to install Hadoop using the link below.
"http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php"
I was moving the files to /usr/local/hadoop, but I got the following error:
hduser@vijaicricket-Lenovo-G50-70:~$ ~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop
-bash: /home/hduser/hadoop-2.6.0$: No such file or directory
Where did you extract the Hadoop tar file? From the shell error, it looks like the /home/hduser/hadoop-2.6.0 directory doesn't exist. Also make sure the user has valid permissions.
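For what it's worth, the trailing $ in the pasted ~/hadoop-2.6.0$ looks like a copied shell prompt, which is why bash tried to run /home/hduser/hadoop-2.6.0$ as a command. The likely intent, sketched (you may still need sudo to write to /usr/local):

```shell
# Change into the extracted folder first, then move its contents.
cd ~/hadoop-2.6.0
sudo mv * /usr/local/hadoop
```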
I am doing prep work for App Academy. The final stage before I am done with my prep work is to complete a Ruby intro course called "Test First Ruby".
The first step after installing RSpec is to enter the course directory. In the terminal that is cd learn_ruby, simple enough, except it returns a message that says "The system cannot find the path specified". I have been noticing this message on certain commands throughout my Ruby learning so far, and I am wondering: what does it mean, and how can I fix it?
Any help would be great.
cd means 'change directory'. The error you are getting means that the directory does not exist in the location your command line is currently in.
This looks like a reasonable intro to UNIX filesystem: http://www.doc.ic.ac.uk/~wjk/UnixIntro/Lecture2.html
UNIX reference: http://sunsite.utk.edu/UNIX-help/quickref.html
Create the directory before you cd into it:
mkdir learn_ruby
cd learn_ruby
Do check out AJcodez's links to familiarize yourself with Unix filesystem commands.