I am trying to learn how JuiceFS works.
So, say I had two computers on the LAN with IP addresses 192.168.1.11 and 192.168.1.12, a third at 192.168.1.100 (for metadata), and a fourth computer on the same LAN where I ran this:
# --bucket: remote SFTP/SSH address and path
# --access-key: SFTP user, --secret-key: SFTP password
# last line: metadata DB URL and file system name
juicefs format \
--storage sftp \
--bucket 192.168.1.11:myjfs/ \
--access-key tom \
--secret-key 123456 \
redis://192.168.1.100:6379/1 myjfs

juicefs format \
--storage sftp \
--bucket 192.168.1.12:myjfs/ \
--access-key tom \
--secret-key 123456 \
redis://192.168.1.100:6379/1 myjfs
Can I mount the JuiceFS like this:
juicefs mount redis://192.168.1.100:6379/1 ~/jfs
Will it appear as a file system with the combined storage capacity of the two computers (192.168.1.11 and 192.168.1.12)?
No. A JuiceFS file system is composed of one metadata database and one object storage. So only this:
juicefs format \
--storage sftp \
--bucket 192.168.1.11:myjfs/ \
--access-key tom \
--secret-key 123456 \
redis://192.168.1.100:6379/1 myjfs
or only
juicefs format \
--storage sftp \
--bucket 192.168.1.12:myjfs/ \
--access-key tom \
--secret-key 123456 \
redis://192.168.1.100:6379/1 myjfs
is enough.
After the first command creates the file system successfully, the second command will report an error.
So, when you mount the filesystem via:
juicefs mount redis://192.168.1.100:6379/1 ~/jfs
You will be using the storage on 192.168.1.11:myjfs/, not 192.168.1.12:myjfs/.
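If you want to double-check which object storage a formatted file system points at, the format settings live in the metadata engine and can be printed with juicefs status (a minimal sketch; the exact output fields may vary between JuiceFS versions):
juicefs status redis://192.168.1.100:6379/1
# the output should include "Storage": "sftp" and "Bucket": "192.168.1.11:myjfs/"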
Of course, if you wish to make full use of the storage space on both PCs, you can format two file systems, noting that each file system must use a separate Redis DB. For example:
juicefs format \
--storage sftp \
--bucket 192.168.1.11:myjfs/ \
--access-key tom \
--secret-key 123456 \
redis://192.168.1.100:6379/1 myjfs1    # metadata DB 1, file system name myjfs1
juicefs format \
--storage sftp \
--bucket 192.168.1.12:myjfs/ \
--access-key tom \
--secret-key 123456 \
redis://192.168.1.100:6379/2 myjfs2    # metadata DB 2, file system name myjfs2
Note the different file system names and Redis DB numbers; you can then mount them via:
juicefs mount redis://192.168.1.100:6379/1 ~/jfs1
juicefs mount redis://192.168.1.100:6379/2 ~/jfs2
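Both file systems can then be used side by side. A quick way to confirm that each mount point is live (note that the capacity JuiceFS reports is a logical value, not the physical free space on the SFTP hosts):
df -h ~/jfs1 ~/jfs2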
I use this command to create an instance on AWS:
docker-machine create \
-d amazonec2 \
--amazonec2-region ap-northeast-1 \
--amazonec2-zone a \
--amazonec2-ami ami-XXXXXX \
--amazonec2-keypair-name my_key_pair \
--amazonec2-ssh-keypath ~/.ssh/id_rsa \
my_instance
I can't connect to it by SSH.
my_key_pair is the name of a key pair that exists on AWS, and ~/.ssh/id_rsa is my local SSH private key. How do I set the right values?
I have read the documentation but didn't find an example that uses both --amazonec2-keypair-name and --amazonec2-ssh-keypath.
Download the key file from "Key Pairs" in the AWS Console and place it in ~/.ssh. Then run:
docker-machine create \
-d amazonec2 \
--amazonec2-region ap-northeast-1 \
--amazonec2-zone a \
--amazonec2-ami ami-XXXXXX \
--amazonec2-ssh-keypath ~/.ssh/keypairfile \
my_instance
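Two follow-up steps that may help (assuming the file and machine names above): OpenSSH refuses keys with loose permissions, and docker-machine ssh is a quick way to verify the connection works.
chmod 400 ~/.ssh/keypairfile      # private keys must not be readable by other users
docker-machine ssh my_instance    # should open a shell on the new instance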
In GitLab Runner using Docker+Machine, you have to provide both "amazonec2-keypair-name=XXX" and "amazonec2-ssh-keypath=XXX".
The keypath should be something like /home/gitlab-runner/.ssh/id_rsa, and that directory should also contain an id_rsa.pub file alongside id_rsa. These two files should not be your locally generated key pair; they should be the id_rsa and id_rsa.pub derived from the .pem key pair created on AWS.
The following commands will do the trick:
cat faregate-test.pem > /home/gitlab-runner/.ssh/id_rsa
ssh-keygen -y -f faregate-test.pem > /home/gitlab-runner/.ssh/id_rsa.pub
This will allow the runner manager instance to connect to the provisioned runner using your existing AWS key pair.
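As a small addition (assuming the paths above and that the runner runs as the gitlab-runner user), give the copied keys the permissions OpenSSH expects:
chmod 600 /home/gitlab-runner/.ssh/id_rsa
chmod 644 /home/gitlab-runner/.ssh/id_rsa.pub
chown gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/id_rsa /home/gitlab-runner/.ssh/id_rsa.pub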
I am trying to upload files from my local computer to a server via ssh for deployment. In the upload, I want to exclude some files like .pyc and BUILD.
I have managed to exclude all of the files except the ones called BUILD.
This is currently my (dry-run) terminal command:
rsync -e ssh --dry-run \
--recursive --archive --verbose \
--delete \
--exclude='*.pyc' \
--exclude='*.scss' \
--exclude='__*.js' \
--exclude='*BUILD' \
--exclude='*.jar' \
--exclude='*.DS_Store' \
--exclude='__pycache__' \
local_folder/ \
server:server_folder/
All the exclusions work, except BUILD.
I tried:
--exclude='*/BUILD'
--exclude='*BUILD'
--exclude='BUILD'
None of the above seems to detect and delete the existing BUILD files.
Any ideas on how I can exclude these files?
Thank you!
The command seems to be working; it could be that the BUILD files already existed on the destination from a previous transfer.
If you have excluded files or directories from being transferred, --delete-excluded will remove them from the destination side, so this should work:
rsync -e ssh --dry-run \
--recursive --archive --verbose \
--exclude='*.pyc' \
--exclude='*.scss' \
--exclude='__*.js' \
--exclude='*BUILD' \
--exclude='*.jar' \
--exclude='*.DS_Store' \
--exclude='__pycache__' \
--delete-excluded \
local_folder/ \
server:server_folder/
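If you want to convince yourself that the exclude pattern matches before touching the server, a quick local dry run against a throwaway destination shows which files would still be transferred (a sketch; /tmp/rsync-test is just an arbitrary scratch directory):
mkdir -p /tmp/rsync-test
rsync --dry-run --recursive --verbose --exclude='BUILD' local_folder/ /tmp/rsync-test/ | grep BUILD
# no output means every BUILD file is excluded; any BUILD files still on the server are
# leftovers from an earlier transfer, which --delete-excluded will remove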
To complement, check also this answer, which explains the delete options in rsync: https://superuser.com/a/156702/284722
I'm running into an odd behavior with the latest version of Vagrant in a Windows 7/msys/VirtualBox environment: after executing vagrant up, I get an rsync error during the provisioning stage: 'file has vanished: "/c/Users/spencerd/workspace/watcher/.LISTEN'.
Since Google, IRC, and the issue trackers have little to no documentation on this issue, I wonder if anyone else has run into it and what the fix would be.
And for the record, I have successfully built a box using the same Vagrantfile and provisioning script. For those who want to look, the project code is up at https://gist.github.com/denzuko/a6b7cce2eae636b0512d, with the debug log at gist.github.com/
After digging further into the directory structure and running into issues pushing code up with git, I was able to find a leftover file that needed to be removed after a reboot.
Thus, doing a reboot and a rm -rf -- "./.LISTEN\ \ \ \ \ 0\ \ \ \ \ \ 100\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ " did the trick.
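If you need to hunt down a stray name like that yourself, listing the directory with escaped quoting makes the trailing spaces visible, and find can delete the matches without retyping the exact name (a sketch assuming GNU coreutils/findutils):
ls -la --quoting-style=escape | grep LISTEN
find . -maxdepth 1 -name '.LISTEN*' -exec rm -rf -- {} +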
I'm unable to bootstrap my server because "knife ec2 server create" keeps expanding my runlist to "roles".
knife ec2 server create \
-V \
--run-list 'role[pgs]' \
--environment $1 \
--image $AMI \
--region $REGION \
--flavor $PGS_INSTANCE_TYPE \
--identity-file $SSH_KEY \
--security-group-ids $PGS_SECURITY_GROUP \
--subnet $PRIVATE_SUBNET \
--ssh-user ubuntu \
--server-connect-attribute private_ip_address \
--availability-zone $AZ \
--node-name pgs \
--tags VPC=$VPC
It consistently fails because 'role[pgs]' is expanded to 'roles'. Why is this? Is there some escaping or alternative method I can use?
I'm currently working around this by bootstrapping with an empty run-list and then overriding the runlist by running chef-client once the node is registered.
This is a feature of bash: [] is a glob (wildcard) pattern, and role[pgs] matches a file named roles in the current directory, which is why it gets expanded. You can escape the brackets using "\".
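You can see the expansion (and the workaround) with a quick test; bash only expands the glob when something in the current directory matches it, which is why the behaviour can look intermittent (a minimal sketch):
touch roles              # a file that happens to match the glob role[pgs]
echo role[pgs]           # prints: roles  (bash expanded the glob)
echo 'role[pgs]'         # prints: role[pgs]
echo role\[pgs\]         # prints: role[pgs]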
I have a problem with my spec file. When I run it with rpmbuild, it says it can't find ./configure: no such file or directory. Here is part of my spec file. Can someone help me?
...
BuildRequires: gd-devel > 1.8, mailx, libjpeg-devel, libpng-devel
Requires: httpd php53 gcc
%description
Nagios is a program that will monitor hosts and services on your
network.
%package common
Group: Applications/System
Summary: Provides common directories, uid and gid among nagios-related packages
Requires(pre): shadow-utils
Requires(post): shadow-utils
Provides: user(nagios)
Provides: group(nagios)
%description common
Provides common directories, uid and gid among nagios-related packages.
%prep
%setup -q -n %{name}-%{version}
%build
%configure \
--prefix=%{_datadir}/%{name} \
--exec-prefix=%{_localstatedir}/lib/%{name} \
--with-init-dir=%{_initrddir} \
--with-cgiurl=/%{name}/cgi-bin/ \
--with-htmlurl=/%{name} \
--with-lockfile=%{_localstatedir}/run/%{name}.pid \
--libdir=%{_libdir}/%{name} \
--with-nagios-user=nagios \
--with-nagios-grp=nagios \
--bindir=%{_sbindir} \
--libexecdir=%{_libdir}/%{name}/plugins \
--sysconfdir=%{_sysconfdir}/%{name} \
--localstatedir=%{_localstatedir}/log/%{name} \
--datadir=%{_datadir}/%{name}/html \
--with-gd-lib=%{_libdir} \
--with-gd-inc=%{_includedir} \
--enable-embedded-perl \
--with-perlcache \
...
I am not familiar with Nagios, but have you confirmed that when you extract the distribution tarball there is a configure script in the top-level directory? If not, you need to adjust your %prep steps so the build runs in the directory that contains it.
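A quick way to check is to list the tarball your Source0 points at and compare its top-level directory with what %setup expects (a sketch; nagios-4.4.6.tar.gz stands in for whatever your actual source tarball is called):
tar tzf nagios-4.4.6.tar.gz | head -3
# if the top-level directory is not %{name}-%{version}, tell %setup its real name, e.g.:
#   %setup -q -n nagios-4.4.6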