Cannot calculate MAC address
Using fd 7 for I/O notifications
hv_vm_create HV_ERROR (unspecified error)
STEP 1: Take a backup of your current files
cp -rp ~/.bitnami ~/.bitnami.back
STEP 2: Download the new hyperkit binary
cd /tmp
curl -LJO "https://downloads.bitnami.com/files/hyperkit/hyperkit-testing-20210430"
STEP 3: Ensure that the md5 checksum matches this one
md5 /tmp/hyperkit-testing-20210430
Results => MD5 (/tmp/hyperkit-testing-20210430) = 37495adde6a3279dd7265904b85c3dc9
Warning: Do not continue with the next step if the md5 checksum doesn’t match
STEP 4: Replace your current hyperkit binary with the downloaded one
mv /tmp/hyperkit-testing-20210430 ~/.bitnami/stackman/helpers/hyperkit
chmod +x ~/.bitnami/stackman/helpers/hyperkit
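To sanity-check the replacement before relying on it, you can ask the new binary for its version (a quick check, assuming this build supports hyperkit's usual -v version flag):
# Optional check: the replaced binary should execute and print its version.
~/.bitnami/stackman/helpers/hyperkit -v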
CREDITS: https://community.bitnami.com/t/bitnami-wordpress-mac-os-11-3-big-sur-error-starting-wordpress-stack/94776/9
For more information, please refer to the link above.
I followed a tutorial (https://blobtoolkit.genomehubs.org/install/), specifically the section "2. Fetch the nt database".
The first step is mkdir -p nt (I am done with that part).
The second step is:
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.??.tar.gz" -P nt/ && \
for file in nt/*.tar.gz; \
do tar xf $file -C nt && rm $file; \
done
If I copy and paste the second-step command, it doesn't work. Maybe that's because I am not sure what
&& \
for file in nt/*.tar.gz; \
do tar xf $file -C nt && rm $file; \
done
means, so I tried using
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt/*.tar.gz"
first, but I received these error messages:
Resolving ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)... 130.14.250.13, 2607:f220:41e:250::13, 2607:f220:41e:250::11, ...
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|130.14.250.13|:21... failed: Connection refused.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::13|:21... failed: Network is unreachable.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::11|:21... failed: Network is unreachable.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::10|:21... failed: Network is unreachable.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::12|:21... failed: Network is unreachable.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::7|:21... failed: Network is unreachable.
Any idea what the problem is? How do I adjust the second-step command to download the database? Please let me know, thank you.
Warning: wildcards not supported in HTTP.
http://ftp.ncbi.nlm.nih.gov/blast/db/nt/*.tar.gz
Resolving ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)...
The host looks like an FTP server; you shouldn't be requesting from it over HTTP. It should be wget ftp://ftp.ncbi.... instead.
I can't seem to find where in the tutorial you linked they use wget http://ftp... The command before the one you referenced (2. Fetch the nt database) is a curl command and uses ftp.
Perhaps edit the question to show where in the docs it tells you to do what you did, and I can look closer.
Edit:
First try this: wget "ftp://ftp.ncbi.nlm.nih.gov". It's a simpler command. It should tell you that you logged in as anonymous.
Given more info in the question, I tried both the commands given.
The first one worked for me out of the box. I got the following output:
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.??.tar.gz" -P nt/ && \ for file in nt/*.tar.gz; \ do tar xf $file -C nt && rm $file; \ done
--2020-11-15 13:16:30-- ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.??.tar.gz
=> ‘nt/.listing’
Resolving ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)... 2607:f220:41e:250::13, 2607:f220:41e:250::10, 2607:f220:41e:250::11, ...
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::13|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /blast/db ... done.
==> EPSV ... done. ==> LIST ... done.
.listing [ <=> ] 43.51K 224KB/s in 0.2s
2020-11-15 13:16:32 (224 KB/s) - ‘nt/.listing’ saved [44552]
Removed ‘nt/.listing’.
--2020-11-15 13:16:32-- ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.00.tar.gz
=> ‘nt/nt.00.tar.gz’
==> CWD not required.
==> EPSV ... done. ==> RETR nt.00.tar.gz ... done.
Length: 3937869770 (3.7G)
nt.00.tar.gz 3%[ ] 133.87M 10.2MB/s eta 8m 31s
The second one also connected and logged in, but then failed on the path. Probably a typo in the file path somewhere, but nothing big.
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt/*.tar.gz"
--2020-11-15 13:17:14-- ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt/*.tar.gz
=> ‘.listing’
Resolving ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)... 2607:f220:41e:250::10, 2607:f220:41e:250::11, 2607:f220:41e:250::7, ...
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::10|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /blast/db/nt ...
No such directory ‘blast/db/nt’.
About && and \, those are just shell syntax. && means 'and then': it chains multiple commands into one, running the next command only if the previous one succeeded. \ escapes the newline, so you can continue a command on a new line without the shell treating it as you pressing Enter.
Neither of these is the root of your problem.
The errors you're getting seem to have nothing to do with the actual commands and more to do with the network. Perhaps you're behind a firewall or a proxy. I would try the commands on a different Wi-Fi network, or, if you know how to change the firewall settings on your router (I don't), try to fiddle around with that.
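To make the loop easier to read, here is the same tutorial command spread across lines with comments (the commands themselves are unchanged):
# Download all nt.NN.tar.gz volumes into nt/; run the loop only if the download succeeded (&&).
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.??.tar.gz" -P nt/ && \
for file in nt/*.tar.gz; do
  # Extract each archive into nt/, then delete the archive only if extraction succeeded.
  tar xf $file -C nt && rm $file
done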
I want to make a very simple bash script for downloading files from Google Drive via the Drive API. There is a big file on my Google Drive, and I opened the OAuth 2.0 Playground with my Google account; in the Select the Scope box I chose Drive API v3 and https://www.googleapis.com/auth/drive.readonly to make a token and link.
After clicking Authorize APIs and then Exchange authorization code for tokens, I copied the Access token like below.
#! /bin/bash
read -p 'Enter your id : ' id
read -p 'Enter your new token : ' token
read -p 'Enter your file name : ' file
curl -H "Authorization: Bearer $token" "https://www.googleapis.com/drive/v3/files/$id?alt=media" -o "$file"
but it doesn't work. Any idea?
For example, the size of my file is 12 GB. When I run the script I get this as output, and after a second it returns to the prompt again! I checked it on two computers with two different IP addresses. (I also added alt=media to the URL.)
-bash-3.2# bash mycode.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 166 100 166 0 0 80 0 0:00:02 0:00:02 --:--:-- 80
-bash-3.2#
the content of file that it created is like this
{
"error": {
"errors": [
{
"domain": "global",
"reason": "downloadQuotaExceeded",
"message": "The download quota for this file has been exceeded."
}
],
"code": 403,
"message": "The download quota for this file has been exceeded."
}
}
You want to download a file from Google Drive using the curl command with the access token.
If my understanding is correct, how about this modification?
Modified curl command:
Please add the query parameter alt=media.
curl -H "Authorization: Bearer $token" "https://www.googleapis.com/drive/v3/files/$id?alt=media" -o "$file"
Note:
This modified curl command supposes that your access token can be used for downloading the file.
With this modification, files other than Google Docs files can be downloaded. If you want to download Google Docs files, please use the Files: export method of the Drive API. Ref
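For reference, a minimal sketch of the export case (the application/pdf MIME type here is just an example; choose the export format you need):
# Hypothetical example: export a Google Docs file as PDF via the files.export endpoint.
curl -H "Authorization: Bearer $token" \
  "https://www.googleapis.com/drive/v3/files/$id/export?mimeType=application/pdf" \
  -o "$file"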
Reference:
Download files
If I misunderstood your question and this was not the direction you want, I apologize.
UPDATE AS OF MARCH 2021
Simply follow this guide here. It worked for me.
In summary:
For small files to download run
wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME
While if you are trying to download a quite large file you should try to run
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt
Simply substitute FILEID and FILENAME with your custom values.
FILEID can be found in your file's share link (after the /d/, as illustrated in the article mentioned above).
FILENAME is simply the name you want to save the download as. Remember to include the right extension, for example FILENAME = my_file.pdf if the file is a PDF.
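To avoid editing the long command by hand each time, here is a small wrapper sketch (the gdrive_download name is made up; the body is the same large-file command from the guide):
# Hypothetical helper around the large-file wget command above.
gdrive_download () {
  FILEID="$1"
  FILENAME="$2"
  wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate "https://docs.google.com/uc?export=download&id=${FILEID}" -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=${FILEID}" -O "${FILENAME}" && rm -rf /tmp/cookies.txt
}
# Usage: gdrive_download FILEID my_file.pdf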
This is a known bug
It has been reported in this Issue Tracker post. This happens because, as you can read in the documentation:
(about download url)
Short lived download URL for the file. This field is only populated
for files with content stored in Google Drive; it is not populated for
Google Docs or shortcut files.
So you should use another field.
You can follow the report by clicking on the star next to the issue
number to give more priority to the bug and to receive updates.
As you can read in the comments of the report, the current workaround is:
Use webContentLink instead
or
Change www.googleapis.com to content.googleapis.com
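As a sketch of the first workaround (assuming the same $token and $id variables as in the question), you can request just that field with files.get and its fields parameter:
# Hypothetical example: fetch the webContentLink field instead of the unpopulated download URL.
curl -H "Authorization: Bearer $token" \
  "https://www.googleapis.com/drive/v3/files/$id?fields=webContentLink"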
I want to install Elasticsearch 5.6.4 on Ubuntu 17.10, so I downloaded elasticsearch.deb and elasticsearch.deb.sha1. Following the instructions in this guide, after I run
shasum -a 512 -c elasticsearch-6.2.1.tar.gz.sha512
I got this error:
shasum: elasticsearch-5.6.4.deb.sha1: no properly formatted SHA1 checksum lines found
What does this error mean, and what should I do?
You are correct, and I'm a bit puzzled (since I wrote that section in the Elastic docs): shasum -a 512 works on other operating systems, and checking the man page, I would have thought it should do the same on Ubuntu:
-a, --algorithm 1 (default), 224, 256, 384, 512, 512224, 512256
When verifying SHA-512/224 or SHA-512/256 checksums, indicate the
algorithm explicitly using the -a option, e.g.
shasum -a 512224 -c checksumfile
I'm not sure why shasum -a 512 doesn't work here, but these 3 alternatives all give you the correct result:
shasum -c elasticsearch-6.2.1.deb.sha512 -a 512
shasum -a 512256 -c elasticsearch-6.2.1.deb.sha512
sha512sum -c elasticsearch-6.2.1.deb.sha512
This answer is not so much for the OP (who is hopefully sorted now) but any passers by who encounter the error in the question.
The error
shasum: [CHECKSUM_FILENAME] : no properly formatted SHA[TYPE] checksum lines found
indicates that the checksum file passed to the -c flag is not formatted as
follows
a67eb6eeeff63ac77d34c2c86b0a3fa97f69a9d3f8c9d34c20036fa79cb4214d ./kbld-linux-amd64
Where
the first field is the expected checksum,
the second field is a ' ' character indicating that the file is to be checked as a text file (as opposed to being checked as a binary file or in Universal mode, which ignores newlines),
and the third field is the name of the file you likely just downloaded and whose integrity you want to verify
So, in the example above, the developers who created kbld supplied that text on their release page to show the checksums they calculated after building the kbld binaries for various platforms.
I added the line for the Linux build to a file called kbld_v0_7_0.checksum, and then I ran the following in the directory where I downloaded the kbld-linux-amd64 binary:
$ shasum -c kbld_v0_7_0.checksum -a 256
./kbld-linux-amd64: OK
The OK from shasum shows that the binary I downloaded, ./kbld-linux-amd64, generates the same sha256 checksum that was produced when the developers did their build, which indicates that the files are, in all likelihood, identical.
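If you just want to see the expected file format, generating a checksum file yourself reproduces it exactly (a quick sketch using the same file names as the example above):
# Generate a checksum line in the exact format that -c expects.
shasum -a 256 ./kbld-linux-amd64 > kbld_v0_7_0.checksum
cat kbld_v0_7_0.checksum    # <hash>  ./kbld-linux-amd64
# Verify the file against it.
shasum -a 256 -c kbld_v0_7_0.checksum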
I am trying to set the permissions on my device driver file to read/write for all users using udev rules, but it does not work.
Here are the udev rules:
SUBSYSTEM=="lpc*", KERNEL=="lpc?*", DRIVER=="lpc", GROUP="users", MODE="0666"
When I test it using:
sudo udevadm test $(udevadm info -q path -n /dev/lpc_peach) 2>&1
this is what I get in the bottom lines:
preserve permissions /dev/lpc_peach, 020600, uid=0, gid=0
preserve already existing symlink '/dev/char/248:0' to '../lpc_peach'
I can't identify what is wrong! Any help would be useful.
Edit 1: When I run udevadm info -q all -n /dev/lpc_peach, here is what I get:
P: /devices/virtual/lpc_spartan/lpc_peach
N: lpc_peach
E: DEVNAME=/dev/lpc_peach
E: DEVPATH=/devices/virtual/lpc_spartan/lpc_peach
E: MAJOR=247
E: MINOR=0
E: SUBSYSTEM=lpc_spartan
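For what it's worth, the udevadm output above reports SUBSYSTEM=lpc_spartan and no DRIVER property at all, so a rule keyed only on the properties actually reported might look like the sketch below. This is an assumption based solely on the output shown, not a verified fix:
# Hypothetical rule matching only the properties udevadm reported (no DRIVER match).
SUBSYSTEM=="lpc_spartan", KERNEL=="lpc*", GROUP="users", MODE="0666"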
I'm attempting to create a filesystem within a file.
Under Linux it's very simple.
Create a blank file of size 8 GB:
dd of=fsFile bs=1 count=0 seek=8G
"format" the drive:
mkfs.ext2 fsFile
Works great.
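For reference, under Linux the resulting image can then be used by mounting it through a loop device, e.g.:
# Mount the file-backed filesystem via a loop device (requires root).
mkdir -p /mnt/fsfile
mount -o loop fsFile /mnt/fsfile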
However, under Cygwin, running ./mkfs.ext2 from /usr/sbin produces all kinds of weird errors (I assume because of some abstraction layer). With Cygwin I get:
mkfs.ext2: Device size reported to be zero. Invalid partition specified, or
partition table wasn't reread after running fdisk, due to
a modified partition being busy and in use. You may need to reboot
to re-read your partition table.
or even worse (if I try to access the file through /cygdrive/...):
mkfs.ext2: Bad file descriptor while trying to determine filesystem size
:(
Please help,
Thanks
Well, it seems that the way to solve it is to not use any path on the file you wish to modify; doing that seems to have solved it.
It also seems that my 8 GB file is simply too big: it looks like the size variable resets, i.e.:
$ /usr/sbin/fsck.ext2 -f testFile8GiG
e2fsck 1.41.12 (17-May-2010)
The filesystem size (according to the superblock) is 2097152 blocks
The physical size of the device is 0 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort? no
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
testFile8GiG: 122/524288 files (61.5% non-contiguous), 253313/2097152 blocks
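Putting those two observations together, the workaround looks roughly like this (a sketch; the 2 GB size is just an example below the point where the size appears to reset):
# Run from the directory containing the image, so no path is involved.
cd ~
dd of=testFile bs=1 count=0 seek=2G
/usr/sbin/mkfs.ext2 testFile
/usr/sbin/fsck.ext2 -f testFile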
Thanks anyway