I've been trying to install a few ports (wget, autoconf, coreutils, etc.), but it seems impossible! Here's what I have done, step by step:
I'm using OS X 10.9.1 Mavericks, and I downloaded and installed MacPorts using the installation package (.pkg) from the MacPorts website. I already had Xcode 5.0.2 installed, so I logged into my Apple iOS developer account, downloaded command_line_tools_os_x_mavericks_for_xcode__late_october_2013.dmg, and installed the package.
When I use
sudo port install coreutils
I get the following error:
Error: Port coreutils not found
I thought (and Googled, of course) it must be because I hadn't updated MacPorts, so I tried a self-update using sudo port -v selfupdate, which was also unsuccessful; I got the following error log:
---> Updating MacPorts base sources using rsync
rsync: failed to connect to rsync.macports.org: Operation timed out (60)
rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync42/rsync/clientserver.c(105) [receiver=2.6.9]
Command failed: /usr/bin/rsync -rtzv --delete-after rsync://rsync.macports.org/release/tarballs/base.tar /opt/local/var/macports/sources/rsync.macports.org/release/tarballs
Exit code: 10
Error: Error synchronizing MacPorts sources: command execution failed
To report a bug, follow the instructions in the guide:
http://guide.macports.org/#project.tickets
Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing MacPorts sources: command execution failed
Based on the "failed to connect" message, I thought it might be caused by the restrictions and sanctions applied to my IP address, which is currently from Iran (I figured that out because I cannot even open the MacPorts website directly without using a proxy server). I used the instructions at the following URL to reroute the connection and make MacPorts connect through a proxy server:
http://samkhan13.wordpress.com/2012/06/15/make-macports-work-behind-proxy/
The instructions above fetch the port tree as a .tar.gz archive over HTTP. I no longer got the connection error, but I did get a "Could not access the file" error, so I downloaded the file manually, set up a local Apache web server, and replaced the HTTP URL with my localhost link.
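For reference, the change from that post boils down to swapping the rsync source for an archive URL in /opt/local/etc/macports/sources.conf, roughly like this (the localhost URL is my local Apache copy; yours would be wherever you serve the archive from):
#rsync://rsync.macports.org/release/tarballs/ports.tar [default]
http://localhost/ports.tar.gz [default]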
Everything seemed to be fine when using
sudo port -v sync
instead of sudo port -v selfupdate (sync updates only the ports tree, using the source in sources.conf, while selfupdate also updates MacPorts base over rsync).
Here's how the log started:
---> Updating the ports tree
Synchronizing local ports tree from http://localhost/ports.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 24.6M 100 24.6M 0 0 98.9M 0 --:--:-- --:--:-- --:--:-- 99.1M
x ports/
x ports/gnome/
x ports/gnome/gnofract4d/
x ports/gnome/gnofract4d/Portfile
x ports/gnome/gnofract4d/files/
x ports/gnome/gnofract4d/files/patch-setup.py.diff
x ports/gnome/gnofract4d/files/patch-win.diff
x ports/gnome/gnofract4d/files/patch-fract4d_fractconfig.py.diff
x ports/gnome/gnofract4d/files/patch-fract4d-c-imageIO.cpp.diff
x ports/gnome/libchamplain/
x ports/gnome/libchamplain/Portfile
x ports/gnome/gconf/
x ports/gnome/gconf/Portfile
x ports/gnome/goocanvas/
x ports/gnome/goocanvas/Portfile
x ports/gnome/gstreamer1-gst-libav/
.
.
.
But in the end, I got some errors:
.
.
.
x ports/net/daemonlogger/Portfile
x ports/net/dibbler/
x ports/net/dibbler/Portfile
x ports/net/dibbler/files/
x ports/net/dibbler/files/0-enable-prefix.patch
x ports/net/dibbler/files/1-correct-man-pages.patch
x ports/PortIndex_darwin_11_i386/
x ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
Command failed: cd /opt/local/var/macports/sources/localhost/ports/.. && /usr/bin/tar -v -z -xf ports.tar.gz
Exit code: 1
Error: Extracting http://localhost/ports.tar.gz failed (command execution failed)
port sync failed: Synchronization of 1 source(s) failed
Now I still cannot install any ports, and if I revert the default link in /opt/local/etc/macports/sources.conf to its original rsync one, everything returns to the way it was (all the errors, all the messages, etc.).
If I don't revert and keep using the file I put on my localhost (or use file:// to address the file directly), here's what happens when I try to install a port (for example, with sudo port install coreutils):
Port extract failed: ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
while executing
"macports::fetch_port $path 1"
(procedure "macports::getportdir" line 12)
invoked from within
"macports::getportdir $source"
(procedure "macports::getindex" line 4)
invoked from within
"macports::getindex $source"
(procedure "_mports_load_quickindex" line 11)
invoked from within
"_mports_load_quickindex"
(procedure "mportinit" line 577)
invoked from within
"mportinit ui_options global_options global_variations"
Error: /opt/local/bin/port: Failed to initialize MacPorts, Port extract failed: ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
I have Googled and read almost every suggested solution, but none has worked, and I'm really stuck with this :(
Any new solution is really appreciated.
No replies, and I found the solution myself!
The only way I found to redirect rsync requests through a proxy server is to tunnel over an L2TP VPN connection (not PPTP). rsync speaks its own protocol on TCP port 873, which typical HTTP proxies won't forward, so a full VPN tunnel is what finally made MacPorts work behind the proxy.
Hope this helps others who are stuck behind this kind of connection.
Instead of the main MacPorts mirror (which is sponsored by Mac OS Forge, which is run by Apple and is thus bound to US law and export restrictions against Iran), you can use an alternate rsync mirror from the list at http://trac.macports.org/wiki/Mirrors.
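Switching mirrors is just a matter of editing the source URL, along these lines (the hostname below is a placeholder for whichever mirror you pick from that list; if I remember correctly, selfupdate additionally reads its own rsync_server setting from macports.conf):
# in /opt/local/etc/macports/sources.conf:
rsync://some.mirror.example.org/release/tarballs/ports.tar [default]
# and in /opt/local/etc/macports/macports.conf:
rsync_server some.mirror.example.org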
If none of the rsync mirrors work for you, also read the FAQ entry for this very question: http://trac.macports.org/wiki/FAQ#selfupdatefails.
I am running the command curl https://cli-assets.heroku.com/install-ubuntu.sh | sh, which throws the following error:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1232 100 1232 0 0 5133 0 --:--:-- --:--:-- --:--:-- 5112
This script requires superuser access to install apt packages.
You will be prompted for your password by sudo.
+ dpkg -s apt-transport-https
+ echo ''
sh: line 4: /etc/apt/sources.list.d/heroku.list: No such file or directory
I also ran sudo snap install --classic heroku, which returned sudo: snap: command not found.
Then I ran sudo apt install snapd, which returned the following:
apt: invalid flag: install
Usage: apt <apt and javac options> <source files>
where apt options include:
-classpath <path> Specify where to find user class files and annotation processor factories
-cp <path> Specify where to find user class files and annotation processor factories
-d <path> Specify where to place processor and javac generated class files
-s <path> Specify where to place processor generated source files
-source <release> Provide source compatibility with specified release
-version Version information
-help Print a synopsis of standard options; use javac -help for more options
-X Print a synopsis of nonstandard options
-J<flag> Pass <flag> directly to the runtime system
-A[key[=value]] Options to pass to annotation processors
-nocompile Do not compile source files to class files
-print Print out textual representation of specified types
-factorypath <path> Specify where to find annotation processor factories
-factory <class> Name of AnnotationProcessorFactory to use; bypasses default discovery process
See javac -help for information on javac options.
warning: The apt tool and its associated API are planned to be
removed in the next major JDK release. These features have been
superseded by javac and the standardized annotation processing API,
javax.annotation.processing and javax.lang.model. Users are
recommended to migrate to the annotation processing features of
javac; see the javac man page for more information.
Finally, I ran wget 0- wget https://toolbelt.heroku.com/install-ubuntu.sh | sh, which returned:
--2020-09-10 18:15:53-- http://0-/
Resolving 0- (0-)... failed: Name or service not known.
wget: unable to resolve host address ‘0-’
--2020-09-10 18:15:54-- http://wget/
Resolving wget (wget)... failed: Name or service not known.
wget: unable to resolve host address ‘wget’
--2020-09-10 18:15:54-- https://toolbelt.heroku.com/install-ubuntu.sh
Resolving toolbelt.heroku.com (toolbelt.heroku.com)... 54.164.74.108, 107.23.162.152, 34.194.108.77, ...
Connecting to toolbelt.heroku.com (toolbelt.heroku.com)|54.164.74.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 719 [text/plain]
Saving to: ‘install-ubuntu.sh’
install-ubuntu.sh 100%[=============================================================>] 719 --.-KB/s in 0s
2020-09-10 18:15:54 (105 MB/s) - ‘install-ubuntu.sh’ saved [719/719]
FINISHED --2020-09-10 18:15:54--
Total wall clock time: 0.4s
Downloaded: 1 files, 719 in 0s (105 MB/s)
So then I ran bash install-ubuntu.sh, which returned:
This script requires superuser access to install apt packages.
You will be prompted for your password by sudo.
sh: line 3: /etc/apt/sources.list.d/heroku.list: No such file or directory
sh: line 6: apt-key: command not found
--2020-09-10 18:16:51-- https://toolbelt.heroku.com/apt/release.key
Resolving toolbelt.heroku.com (toolbelt.heroku.com)... 54.145.36.98, 54.164.74.108, 107.23.162.152, ...
Connecting to toolbelt.heroku.com (toolbelt.heroku.com)|54.145.36.98|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1737 (1.7K) [application/octet-stream]
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s in 0s
Cannot write to ‘-’ (Success).
sh: line 9: apt-get: command not found
sh: line 12: apt-get: command not found
I am taking an online course on Upskill and am on video #125. Please advise how to progress.
Thanks for your time and help.
I have a gist I use specifically for setting up Rails on Cloud9 (though I haven't updated it for Rails 6 yet). I recommend you read through it twice before you try to follow it. You may need to go out of order in some cases, as the exact approach depends on whether you are cloning an existing repo or building a new app.
https://gist.github.com/MyklClason/791d6b14606bc56e72eba2995aab8e76
You probably don't need snap.
Also useful bash aliases for Cloud9:
https://gist.github.com/MyklClason/d71a39ace28b9ec9f0ad
As for your actual issue: the Heroku Toolbelt is obsolete; use this instead:
wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh
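For comparison with the failed attempt in the question, the flags are what matter here:
# as typed earlier: '0-' and the stray second 'wget' are parsed as extra things to download,
# hence the "unable to resolve host address" errors
wget 0- wget https://toolbelt.heroku.com/install-ubuntu.sh | sh
# what the one-liner needs: -q (quiet) and -O- (write the script to stdout so sh can run it)
wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh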
Also, it's often best to look online and check how to install the Heroku CLI (or anything, really) for your OS using the official documentation, though that may not work for older setups. Heroku is one of those things where you basically have to use the newest version, otherwise you are going to run into problems.
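Incidentally, the apt: invalid flag: install output in your question means the apt your shell is finding is the old JDK annotation-processing tool, not Debian's package manager, so this may not be a stock Ubuntu box. A quick check (a sketch; the paths will vary by machine):
command -v apt   # likely points into a JDK bin directory rather than /usr/bin/apt
apt -version     # the JDK tool answers to -version; Debian's apt uses --version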
If you didn't figure this out...
nvm i v8
Followed by...
npm install -g heroku
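Whichever route you take, you can confirm the CLI is on your PATH with something like:
heroku --version   # should print a heroku/7.x (or later) version string
heroku login       # then authenticate before using it with your apps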
I downloaded greenplum-cc-web-4.6.1-LINUX-x86_64.zip for my Greenplum DB 5.18 and followed this link (https://gpcc.docs.pivotal.io/460/topics/setup-collection-agents.html) to install Command Center. Everything went OK except for a failure with gpccinstall. It showed the following errors:
RunCommandOnEachHost fail on host: client-gp03.bj
Error when unzip remote binary on sdw3 bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp00.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp01.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp02.bj bin/gpccws
bin/ccagent
bin/gpcc
conf/app.conf
gpcc_path.sh
bin/start_agent.sh
bin/queryinfocat.sh
bin/gpcc_md5
ccdata/
alert-email/alertTemplate.html
alert-email/send_alert.sh.sample
languages/
languages/zh.json
languages/en.json
Error when unzip remote binary on client-gp03.bj Warning: the ECDSA host key for 'client-gp03.bj' differs from the key for the IP address '10.136.173.8'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:10
Matching host key in /home/gpadmin/.ssh/known_hosts:17
tar: bin/gpccws: Cannot open: File exists
tar: Exiting with failure status due to previous errors
RunCommandOnEachHost failure happened
Has anyone encountered this issue before? I did some searching on Google and in the Pivotal community but failed to find a solution. Any help is appreciated.
BTW, when I ignored the above errors and continued, I found the GPCC web server started successfully. When I logged in, only the "Query Monitor" UI section showed one warning: "GPCC is no longer receiving updates. Check your network status or gpcc status and refresh this page." Other parts of the UI seemed OK.
From here:
Error when unzip remote binary on client-gp03.bj Warning: the ECDSA host key for 'client-gp03.bj' differs from the key for the IP address '10.136.173.8'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:10
Matching host key in /home/gpadmin/.ssh/known_hosts:17
tar: bin/gpccws: Cannot open: File exists
tar: Exiting with failure status due to previous errors
You have duplicate ssh fingerprint keys in your /home/gpadmin/.ssh/known_hosts file. I recommend removing both lines 10 and 17 from that file, then running ssh-keyscan client-gp03.bj >> /home/gpadmin/.ssh/known_hosts
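A minimal sketch of that cleanup (assuming GNU sed, and that lines 10 and 17 are still the offending entries in your known_hosts):
# drop the two stale host-key lines in one pass (sed addresses refer to the input file)
sed -i '10d;17d' /home/gpadmin/.ssh/known_hosts
# then re-learn the current key as described above
ssh-keyscan client-gp03.bj >> /home/gpadmin/.ssh/known_hosts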
After this is complete, try ssh-ing to the host, to see that the fingerprint error is cleared up, and if so, try the gpcc installation again.
I've come across an error starting the hyperledger-composer network that isn't answered in the composer-wiki.
✖ Starting business network definition. This may take a minute...
Error: Error trying to start business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: transaction returned with failure: can't find PEM header: undefined
Command failed
Checking prerequisites:
Fabric 1.2
Composer 0.20.4
Node 8.12.0
Docker 18.01.1
"composer network install" was successful, with file appearing in the docker peer at /var/hyperleder/production/chaincodes
After running the "composer network start" command, a "docker ps" shows a new docker instance with the name:
dev-peer0.org1.example.com-<<business-network-name>>-0.0.7
But any attempt to ping it results in a failure like this:
Error: Error trying to ping. Error: make sure the chaincode <<business-network-name>> has been successfully instantiated and try again: getccdata composerchannel/<<business-network-name>> responded with error: could not find chaincode with name '<<business-network-name>>'
Checking the log of the dev-peer0, it ends with the following:
2018-11-05T05:03:18.227Z [4264161f] ERROR :Composer :Init() can't find PEM header: undefined
2018-11-05T05:03:18.227Z [4264161f] VERBOSE :Composer :#PERF Init() Total (ms) duration for txnID [4264161fc30a61c70884d4c7efb460fea6a755d07bc4852875c393346795227a]: 929.00
2018-11-05T05:03:18.228Z ERROR [lib/handler.js] [composerchannel-4264161f]Calling chaincode Init() returned error response [can't find PEM header: undefined]. Sending ERROR message back to peer
The corresponding error in the peer0 log is a bit larger:
2018-11-05 05:03:18.229 UTC [endorser] SimulateProposal -> ERRO 439d [composerchannel][4264161f] failed to invoke chaincode name:"lscc" , error: transaction returned with failure: can't find PEM header: undefined
github.com/hyperledger/fabric/core/chaincode.(*ChaincodeSupport).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/chaincode_support.go:202
github.com/hyperledger/fabric/core/endorser.(*SupportImpl).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/support.go:131
github.com/hyperledger/fabric/core/endorser.(*Endorser).callChaincode
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:173
github.com/hyperledger/fabric/core/endorser.(*Endorser).SimulateProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:287
github.com/hyperledger/fabric/core/endorser.(*Endorser).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:501
github.com/hyperledger/fabric/core/handlers/auth/filter.(*expirationCheckFilter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/expiration.go:61
github.com/hyperledger/fabric/core/handlers/auth/filter.(*filter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/filter.go:31
github.com/hyperledger/fabric/protos/peer._Endorser_ProcessProposal_Handler
/opt/gopath/src/github.com/hyperledger/fabric/protos/peer/peer.pb.go:112
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).processUnaryRPC
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:923
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).handleStream
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:1148
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:637
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
2018-11-05 05:03:18.229 UTC [endorser] SimulateProposal -> DEBU 439e [composerchannel][4264161f] Exit
Since this last worked, I have updated Composer from 0.19 to 0.20.4 and taken Fabric from 1.1 to 1.2.
Googling suggests that this kind of error ("can't find PEM header: undefined") is associated with a change in key signing. After tearing down Fabric, I re-ran ./createPeerAdminCard.sh. Is there another card or similar that needs to be re-created to accommodate the latest versions?
Thanks to @R Thatcher for pointing me in the right direction. This was all down to mismatched cards, and it was resolved by clearing everything out and starting again.
Specifically, in /fabric-dev-servers:
./stopFabric.sh
./teardownFabric.sh
composer card list
composer card delete -c admin#<business-network-name>
composer card delete -c PeerAdmin#hlfv1
./startFabric.sh
./createPeerAdminCard.sh
Then, changing into the composer/business-network-name directory:
composer network install --card PeerAdmin#hlfv1 --archiveFile business-network-name\#0.0.7.bna
composer network start -c PeerAdmin#hlfv1 -n business-network-name -V 0.0.7 -A admin -S adminpw --file networkadmin.card
composer card import --file networkadmin.card --card admin#business-network-name
composer network ping -c admin#business-network-name
So yes, it was about mismatched cards and not cleaning them up as part of a new deployment.
Although not part of the original problem, it's also worth noting that the -A and -S parameters of the composer network start command HAD to be set to admin and adminpw respectively. See composer issue #3781.
Answering the last remark from @Capn Sparrow:
"the -A and -S parameters of the composer network start command HAD to be set to admin and adminpw respectively."
This is the correct and expected behaviour :-)
With the composer network start command, the -A and -S flags specify an existing user in the CA for whom we want a new set of credentials (certificate and keys), which is then bound to a Composer system participant.
When you use the 'standard development fabric', this has a CA configured with a user called 'admin' whose secret is 'adminpw'. If you had built your own Fabric from scratch, you could choose the name and secret of your first default user. Alternatively, you could work with the fabric-ca client software to create additional users in the CA.
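For example, with the fabric-ca-client tool (a sketch only; the URL, names, and secrets are placeholders for your own CA setup):
# enroll as the CA's bootstrap admin so we are allowed to register new identities
fabric-ca-client enroll -u http://admin:adminpw@localhost:7054
# register an additional identity that could then be handed to composer network start via -A/-S
fabric-ca-client register --id.name alice --id.secret alicepw --id.type client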
I have generated a Yocto image to be used on all my target devices. When that image is running on the target devices, it must be possible to update it from a remote rpm repository over https.
To try doing that, I have added a dnf bbappend to my custom layer:
$ cat recipes-devtools/dnf/dnf_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += " \
file://yocto-adv-rpm.repo \
"
do_install_append () {
install -d ${D}/etc/yum.repos.d
install -m 0600 ${WORKDIR}/yocto-adv-rpm.repo ${D}/etc/yum.repos.d/yocto-adv-rpm.repo
}
FILES_${PN} += "/etc/yum.repos.d"
This is the content of the repository configuration file included by the dnf bbappend recipe:
$ cat recipes-devtools/dnf/files/yocto-adv-rpm.repo
[yocto-adv-rpm]
name=Rocko Yocto Repo
baseurl=https://storage.googleapis.com/my_repo/
gpgkey=https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
enabled=1
gpgcheck=1
This repository configuration breaks the build process of the image. When I try to build the myimage recipe, I always get this error:
ERROR: myimage-1.0-r0 do_rootfs: [log_check] myimage: found 1 error message in the logfile:
[log_check] Failed to synchronize cache for repo 'yocto-adv-rpm', disabling.
ERROR: myimage-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/yocto/yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/log.do_rootfs.731
ERROR: Task (/home/yocto/yocto/sources/meta-mylayer/recipes-images/myimage.bb:do_rootfs) failed with exit code '1'
However, when I replace "https" with "http" in the baseurl variable:
baseurl=http://storage.googleapis.com/my_repo/
Then the myimage recipe builds fine.
The host machine can download files from the https repository using wget:
$ wget https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
The previous command works fine, so the problem is not related to the host machine; I think it must be something related to Google certificates and Yocto stuff.
I found some relevant information in this file:
yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/dnf.librepo.log
The relevant part:
15:56:41 lr_download: Downloading started
15:56:41 check_transfer_statuses: Transfer finished: repodata/repomd.xml (Effective url: https://storage.googleapis.com/my_repo/repodata/repomd.xml)
15:56:41 check_finished_transfer_status: Fatal error - Curl code (77): Problem with the SSL CA cert (path? access rights?) for https://storage.googleapis.com/my_repo/repodata/repomd.xml [error setting certificate verify locations:
CAfile: /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/ca-certificates.crt
CApath: none]
15:56:41 lr_yum_download_repomd: repomd.xml download was unsuccessful
Can anyone provide any useful advice on how to fix this?
Thank you in advance for your time! :-)
I finally fixed my issue by removing my dnf bbappend recipe from my custom layer completely and adding this variable to my distro.conf file:
PACKAGE_FEED_URIS = "https://storage.googleapis.com/my_repo/"
After that, at the end of the build process the image contains a valid /etc/yum.d/oe-remote-repo file and all the necessary stuff to manage it. There is no need to copy "ca-certificates.crt" manually at all.
Also, it's important to execute this command after finishing the build of the image:
$ bitbake package-index
This command generates a "repodata" directory within the package feed, which the target device needs once it uses the repo to update packages with the dnf client.
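Once the image is running on a target, updating against that feed is the usual dnf workflow (a sketch; the repo id dnf reports should match the generated oe-remote-repo entry):
# on the target device
dnf clean all    # drop any stale metadata
dnf makecache    # fetch the repodata produced by 'bitbake package-index'
dnf upgrade      # install updated rpms from the remote feed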
I found a temporary hack to fix my issue:
$ cp /etc/ssl/certs/ca-certificates.crt /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/
After that, I was finally able to build the image using the "https" repo.
Now I am in the process of fixing this issue the right way. I'll come back with the final solution.
I'm compiling a basic example (as much as anything using bare X could be called simple...) using the X11 RECORD extension on the latest version of Ubuntu, and I'm getting the following error:
RECORD extension for local server is version is 1.13
X Error of failed request: XRecordBadContext
Major opcode of failed request: 135 (RECORD)
Minor opcode of failed request: 5 (XRecordEnableContext)
Context in failed request: 0x17
Serial number of failed request: 10
Current serial number in output stream: 10
Any hints about what's wrong?
I believe the XRECORD extension is broken in current servers (see https://bugs.launchpad.net/ubuntu/+source/xorg-server/+bug/315456, although I hit problems with cnee long after the date the fix was supposed to be available). You might want to try installing an older Linux distribution in a virtual machine and giving the sample code a try there.
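One quick sanity check before reaching for a VM: confirm that the server at least advertises the extension (being listed doesn't mean it works correctly, as that bug shows):
xdpyinfo | grep -i record   # RECORD should appear in the server's extension list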