Unable to get Heroku started on Cloud9 IDE - Ruby

I am running the command:
curl https://cli-assets.heroku.com/install-ubuntu.sh | sh
which throws the following error:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1232 100 1232 0 0 5133 0 --:--:-- --:--:-- --:--:-- 5112
This script requires superuser access to install apt packages.
You will be prompted for your password by sudo.
+ dpkg -s apt-transport-https
+ echo ''
sh: line 4: /etc/apt/sources.list.d/heroku.list: No such file or directory
I also ran sudo snap install --classic heroku, which returned sudo: snap: command not found.
Then I ran sudo apt install snapd, which returned the following:
apt: invalid flag: install
Usage: apt <apt and javac options> <source files>
where apt options include:
-classpath <path> Specify where to find user class files and annotation processor factories
-cp <path> Specify where to find user class files and annotation processor factories
-d <path> Specify where to place processor and javac generated class files
-s <path> Specify where to place processor generated source files
-source <release> Provide source compatibility with specified release
-version Version information
-help Print a synopsis of standard options; use javac -help for more options
-X Print a synopsis of nonstandard options
-J<flag> Pass <flag> directly to the runtime system
-A[key[=value]] Options to pass to annotation processors
-nocompile Do not compile source files to class files
-print Print out textual representation of specified types
-factorypath <path> Specify where to find annotation processor factories
-factory <class> Name of AnnotationProcessorFactory to use; bypasses default discovery process
See javac -help for information on javac options.
warning: The apt tool and its associated API are planned to be
removed in the next major JDK release. These features have been
superseded by javac and the standardized annotation processing API,
javax.annotation.processing and javax.lang.model. Users are
recommended to migrate to the annotation processing features of
javac; see the javac man page for more information.
Finally, I ran wget 0- wget https://toolbelt.heroku.com/install-ubuntu.sh | sh which returned
--2020-09-10 18:15:53-- http://0-/
Resolving 0- (0-)... failed: Name or service not known.
wget: unable to resolve host address ‘0-’
--2020-09-10 18:15:54-- http://wget/
Resolving wget (wget)... failed: Name or service not known.
wget: unable to resolve host address ‘wget’
--2020-09-10 18:15:54-- https://toolbelt.heroku.com/install-ubuntu.sh
Resolving toolbelt.heroku.com (toolbelt.heroku.com)... 54.164.74.108, 107.23.162.152, 34.194.108.77, ...
Connecting to toolbelt.heroku.com (toolbelt.heroku.com)|54.164.74.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 719 [text/plain]
Saving to: ‘install-ubuntu.sh’
install-ubuntu.sh 100%[=============================================================>] 719 --.-KB/s in 0s
2020-09-10 18:15:54 (105 MB/s) - ‘install-ubuntu.sh’ saved [719/719]
FINISHED --2020-09-10 18:15:54--
Total wall clock time: 0.4s
Downloaded: 1 files, 719 in 0s (105 MB/s)
So then, I ran bash install-ubuntu.sh, which returned:
This script requires superuser access to install apt packages.
You will be prompted for your password by sudo.
sh: line 3: /etc/apt/sources.list.d/heroku.list: No such file or directory
sh: line 6: apt-key: command not found
--2020-09-10 18:16:51-- https://toolbelt.heroku.com/apt/release.key
Resolving toolbelt.heroku.com (toolbelt.heroku.com)... 54.145.36.98, 54.164.74.108, 107.23.162.152, ...
Connecting to toolbelt.heroku.com (toolbelt.heroku.com)|54.145.36.98|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1737 (1.7K) [application/octet-stream]
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s in 0s
Cannot write to ‘-’ (Success).
sh: line 9: apt-get: command not found
sh: line 12: apt-get: command not found
I am taking an online course on Upskill and am on video #125. Please advise how to progress.
Thanks for your time and help.

I have a gist I use specifically for setting up Rails on Cloud9 (though I haven't updated it for Rails 6 yet). I recommend you read through it twice before you try it. You may need to go out of order in some cases, as the exact approach depends on whether you are cloning an existing repo or building a new app.
https://gist.github.com/MyklClason/791d6b14606bc56e72eba2995aab8e76
You probably don't need snap.
Also useful bash aliases for Cloud9:
https://gist.github.com/MyklClason/d71a39ace28b9ec9f0ad
As for your actual issue: the Heroku Toolbelt is obsolete; use this instead:
wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh
Also, it's often best to check the official documentation for how to install the Heroku CLI (or anything, really) on your OS. That may not work for older setups, but Heroku is something where you basically have to use the newest version, otherwise you are going to run into problems.
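For reference, the official docs also describe a standalone install script; assuming it is still hosted at the same location, it looks like this:
curl https://cli-assets.heroku.com/install.sh | sh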

If you didn't figure this out...
nvm i v8
Followed by...
npm install -g heroku
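Either way, a quick check that the CLI actually landed on your PATH before continuing with the course:
heroku --version
heroku login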

Related

Ansible on SLES: zypper plugin not able to install PostgreSQL 14

I am trying my hand at Ansible after having had some very nice training in it. Currently, my task is to create a playbook that sets up a PostgreSQL cluster (with Patroni and etcd).
However, while installing PostgreSQL should be a pretty easy task, doing it using the zypper plugin throws an error. First, the part of the playbook that should install PostgreSQL:
- name: Installation PostgreSQL 14 Latest ohne Recommendations
  become: true
  zypper:
    disable_recommends: true
    name:
      postgresql14-server
      postgresql14-contrib
      postgresql14-devel
    update_cache: true
  when: ansible_host in pgservers
The error message given is this:
fatal: [goeccdb22l]: FAILED! => {"changed": false, "cmd": ["/usr/bin/zypper", "--quiet", "--non-interactive", "--xmlout", "install", "--type", "package", "--auto-agree-with-licenses", "--no-recommends", "--", "+postgresql14-server postgresql14-contrib postgresql14-devel"], "msg": "No provider of '+postgresql14-server postgresql14-contrib postgresql14-devel' found.", "rc": 104, "stderr": "", "stderr_lines": [], "stdout": "<?xml version='1.0'?>\n<stream>\n<message type=\"error\">No provider of &apos;+postgresql14-server postgresql14-contrib postgresql14-devel&apos; found.</message>\n</stream>\n", "stdout_lines": ["<?xml version='1.0'?>", "<stream>", "<message type=\"error\">No provider of &apos;+postgresql14-server postgresql14-contrib postgresql14-devel&apos; found.</message>", "</stream>"]}
Let's extract the error message:
"msg": "No provider of '+postgresql14-server postgresql14-contrib postgresql14-devel' found."
I tried to replicate the problem using the shell on the target server. However, running the command seems to be able to install the packages:
ansible#goeccdb22l:~> sudo /usr/bin/zypper install --type package --auto-agree-with-licenses --no-recommends -- +postgresql14-server postgresql14-contrib postgresql14-devel
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 12 NEW packages are going to be installed:
libecpg6 libopenssl-1_1-devel libpq5 postgresql postgresql14 postgresql14-contrib postgresql14-devel postgresql14-server postgresql-contrib postgresql-devel postgresql-server zlib-devel
The following package needs additional customer contract to get support:
postgresql14
12 new packages to install.
Overall download size: 8.0 MiB. Already cached: 0 B. After the operation, additional 35.4 MiB will be used.
Continue? [y/n/v/...? shows all options] (y):
I've removed only the --quiet and --non-interactive options from the command, but kept all other given options.
The best idea I have is that user/privilege escalation may work differently when Ansible runs the command than when I log in to the target as the Ansible user and simply put sudo in front of the command.
Edit 1: I have developed an idea of what the problem could be. As I mentioned above, when I tested the command, I removed two options: --quiet and --non-interactive. Testing the command with those two options gives the message:
The flag --quiet is not known.
However, using man zypper, I can clearly see that --quiet is a documented option:
-q, --quiet
Suppress normal output. Brief (esp. result notification) messages and error messages will still be printed, though. If used together with conflicting --verbose option, the --verbose option takes preference.
Now, my idea is that Ansible runs exactly the command it documents in the returned XML, but because --quiet is somehow not understood, zypper ends up reporting that nothing provides the requested package list. That would leave two questions:
Why is --quiet not understood, yet documented? Is that a difference between SLES and openSUSE?
How can I work around it?
As the Ansible zypper module has no option to suppress the --quiet flag, I don't see any way of working around it with parameters. The last resort would be to split the zypper task into smaller shell tasks, which I would like to avoid if possible.
So, with the help of a knowledgeable sysadmin I was able to diagnose the problem.
The list of packages shown above was not in fact a list. Because I missed the dashes in front of the package names, Ansible accepted the whole block, newlines and all, as one single package name and tried to install that.
The solution is to turn the packages into a proper list by prefixing each name with a dash/minus sign.
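For reference, the corrected task would look like this; the only change is turning the package names into a proper YAML list:
- name: Installation PostgreSQL 14 Latest ohne Recommendations
  become: true
  zypper:
    disable_recommends: true
    name:
      - postgresql14-server
      - postgresql14-contrib
      - postgresql14-devel
    update_cache: true
  when: ansible_host in pgservers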
The problem with --quiet was its position: it is a global option that must come before the install subcommand, and I had placed it in the wrong spot when testing.
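To illustrate the position issue: zypper expects global options such as --quiet and --non-interactive before the subcommand. Testing on the shell (package name as above), the first form is accepted, while the second is rejected with the "flag is not known" message quoted earlier:
sudo zypper --quiet --non-interactive install --no-recommends postgresql14-server
sudo zypper install --quiet postgresql14-server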

Error starting hyperledger-composer network after Fabric and Composer version upgrade

I've come across an error starting the hyperledger-composer network that isn't answered in the composer-wiki.
✖ Starting business network definition. This may take a minute...
Error: Error trying to start business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: transaction returned with failure: can't find PEM header: undefined
Command failed
Checking pre-requisites:
Fabric 1.2
Composer 0.20.4
Node 8.12.0
Docker 18.01.1
"composer network install" was successful, with file appearing in the docker peer at /var/hyperleder/production/chaincodes
After running the "composer network start" command, a "docker ps" shows new docker instance with name:
dev-peer0.org1.example.com-<<business-network-name>>-0.0.7
But any attempt to ping this results in a failure like this:
Error: Error trying to ping. Error: make sure the chaincode <<business-network-name>> has been successfully instantiated and try again: getccdata composerchannel/<<business-network-name>> responded with error: could not find chaincode with name '<<business-network-name>>'
Checking the log of the dev-peer0, it ends with the following:
2018-11-05T05:03:18.227Z [4264161f] ERROR :Composer :Init() can't find PEM header: undefined
2018-11-05T05:03:18.227Z [4264161f] VERBOSE :Composer :#PERF Init() Total (ms) duration for txnID [4264161fc30a61c70884d4c7efb460fea6a755d07bc4852875c393346795227a]: 929.00
2018-11-05T05:03:18.228Z ERROR [lib/handler.js] [composerchannel-4264161f]Calling chaincode Init() returned error response [can't find PEM header: undefined]. Sending ERROR message back to peer
The corresponding error in the peer0 log is a bit larger:
2018-11-05 05:03:18.229 UTC [endorser] SimulateProposal -> ERRO 439d [composerchannel][4264161f] failed to invoke chaincode name:"lscc" , error: transaction returned with failure: can't find PEM header: undefined
github.com/hyperledger/fabric/core/chaincode.(*ChaincodeSupport).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/chaincode_support.go:202
github.com/hyperledger/fabric/core/endorser.(*SupportImpl).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/support.go:131
github.com/hyperledger/fabric/core/endorser.(*Endorser).callChaincode
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:173
github.com/hyperledger/fabric/core/endorser.(*Endorser).SimulateProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:287
github.com/hyperledger/fabric/core/endorser.(*Endorser).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:501
github.com/hyperledger/fabric/core/handlers/auth/filter.(*expirationCheckFilter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/expiration.go:61
github.com/hyperledger/fabric/core/handlers/auth/filter.(*filter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/filter.go:31
github.com/hyperledger/fabric/protos/peer._Endorser_ProcessProposal_Handler
/opt/gopath/src/github.com/hyperledger/fabric/protos/peer/peer.pb.go:112
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).processUnaryRPC
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:923
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).handleStream
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:1148
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:637
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
2018-11-05 05:03:18.229 UTC [endorser] SimulateProposal -> DEBU 439e [composerchannel][4264161f] Exit
Since this last worked I have updated composer from 0.19 to 0.20.4, and taken Fabric from 1.1 to 1.2.
Googling suggests that this kind of error "can't find PEM header: undefined" is associated with a change in key signing. After tearing down Fabric I re-ran ./createPeerAdminCard.sh - is there another card or similar that needs to be re-created to accommodate the latest versions?
Thanks to @R Thatcher for pointing me in the right direction. This was all down to mismatching cards, and was resolved by clearing out everything and starting again.
Specifically, in /fabric-dev-servers:
./stopFabric.sh
./teardownFabric.sh
composer card list
composer card delete -c admin@<business-network-name>
composer card delete -c PeerAdmin@hlfv1
./startFabric.sh
./createPeerAdminCard.sh
Then changing into the composer/business-network-name directory:
composer network install --card PeerAdmin@hlfv1 --archiveFile business-network-name@0.0.7.bna
composer network start -c PeerAdmin@hlfv1 -n business-network-name -V 0.0.7 -A admin -S adminpw --file networkadmin.card
composer card import --file networkadmin.card --card admin@business-network-name
composer network ping -c admin@business-network-name
So yes, it was about mismatching cards and not cleaning these up as part of a new deployment.
Although not part of the original problem, it's also worth noting that the -A and -S parameters of the composer network start command HAD to be set to admin and adminpw respectively. See composer issue #3781.
Answering the last remark from @Capn Sparrow:
"the -A and -S parameters of the composer network start command HAD to be set to admin and adminpw respectively."
This is the correct and expected behaviour :-)
With the composer network start command, the -A and -S parameters specify an existing user in the CA for whom we want a new set of credentials (certificate and keys) issued, which are then bound to a Composer system participant.
When you use the 'standard development fabric', this has a CA configured with a user called 'admin' whose secret is 'adminpw'. If you had built your own Fabric from scratch, you could choose the name and secret of your first default user. Alternatively, you could work with the fabric-ca client software to create additional users in the CA.
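As a rough sketch of that last route (the URL, user name, and secret below are placeholders, assuming a development CA listening on localhost:7054):
# enroll the CA's bootstrap identity first
fabric-ca-client enroll -u http://admin:adminpw@localhost:7054
# then register an additional user with its own secret
fabric-ca-client register --id.name alice --id.secret alicepw --id.type client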

Yocto build broken when setting a remote rpm repository with https

I have generated a Yocto image to be used on all my target devices. When that image is running on the target devices, it must be updatable from a remote rpm repository over HTTPS.
To try doing that, I have added a dnf bbappend to my custom layer:
$ cat recipes-devtools/dnf/dnf_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += " \
    file://yocto-adv-rpm.repo \
"
do_install_append () {
    install -d ${D}/etc/yum.repos.d
    install -m 0600 ${WORKDIR}/yocto-adv-rpm.repo ${D}/etc/yum.repos.d/yocto-adv-rpm.repo
}
FILES_${PN} += "/etc/yum.repos.d"
This is the content of repository configuration file included by dnf bbappend recipe:
$ cat recipes-devtools/dnf/files/yocto-adv-rpm.repo
[yocto-adv-rpm]
name=Rocko Yocto Repo
baseurl=https://storage.googleapis.com/my_repo/
gpgkey=https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
enabled=1
gpgcheck=1
This repository configuration breaks the build process of the image. When I try to build the myimage recipe, I always get this error:
ERROR: myimage-1.0-r0 do_rootfs: [log_check] myimage: found 1 error message in the logfile:
[log_check] Failed to synchronize cache for repo 'yocto-adv-rpm', disabling.
ERROR: myimage-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/yocto/yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/log.do_rootfs.731
ERROR: Task (/home/yocto/yocto/sources/meta-mylayer/recipes-images/myimage.bb:do_rootfs) failed with exit code '1'
However, when I replace "https" with "http" in the "baseurl" variable:
baseurl=http://storage.googleapis.com/my_repo/
Then the myimage recipe builds fine.
The host machine can download files from the https repository using wget:
$ wget https://storage.googleapis.com/my_repo/PACKAGEFEED-GPG-KEY-rocko
The previous command works fine, so the problem is not related to the host machine; I think it must be something related to Google's certificates and the Yocto setup.
I found some relevant information inside this file:
yocto/build/tmp/work/machine-poky-linux/myimage/1.0-r0/temp/dnf.librepo.log
The relevant part:
15:56:41 lr_download: Downloading started
15:56:41 check_transfer_statuses: Transfer finished: repodata/repomd.xml (Effective url: https://storage.googleapis.com/my_repo/repodata/repomd.xml)
15:56:41 check_finished_transfer_status: Fatal error - Curl code (77): Problem with the SSL CA cert (path? access rights?) for https://storage.googleapis.com/my_repo/repodata/repomd.xml [error setting certificate verify locations:
CAfile: /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/ca-certificates.crt
CApath: none]
15:56:41 lr_yum_download_repomd: repomd.xml download was unsuccessful
Can some of you provide any useful advice to try to fix this?
Thank you in advance for your time! :-)
I finally fixed my issue by completely removing my dnf bbappend recipe from my custom layer and adding this variable to my distro.conf file:
PACKAGE_FEED_URIS = "https://storage.googleapis.com/my_repo/"
After that, at the end of the build process the image contains a valid /etc/yum.d/oe-remote-repo file and all the necessary stuff to manage it. There is no need to copy "ca-certificates.crt" manually at all.
Also, it's important to execute this command after finishing the build of the image:
$ bitbake package-index
This command generates a "repodata" directory within the package feed, which the target device needs once it starts using the repo to update packages with the dnf client.
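Once an image built with that feed is running, the repo can then be exercised on the target with the standard dnf commands, for example:
# refresh metadata from the remote feed
dnf makecache
# check for and apply updates
dnf check-update
dnf upgrade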
I found a temporary hack to fix my issue:
$ cp /etc/ssl/certs/ca-certificates.crt /home/yocto/yocto/build/tmp/work/x86_64-linux/curl-native/7.54.1-r0/recipe-sysroot-native/etc/ssl/certs/
After that, I was finally able to build the image using the "https" repo.
Now I am in the process of fixing this issue in the right way. I'll come back with the final solution.

Logstash install error: can't get unique system GID (no more available GIDs)

I am trying to install Logstash with yum on a Red Hat VM. I already have the logstash.repo file set up according to the guide, and I ran
yum install logstash
but I get the following error after it downloads everything
...
logstash-2.3.2-1.noarch.rpm | 72 MB 00:52
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
groupadd: Can't get unique system GID (no more available GIDs)
useradd: group 'logstash' does not exist
error: %pre(logstash-1:2.3.2-1.noarch) scriptlet failed, exit status 6
Error in PREIN scriptlet in rpm package 1:logstash-2.3.2-1.noarch
error: install: %pre scriptlet failed (2), skipping logstash-1:2.3.2-1
Verifying : 1:logstash-2.3.2-1.noarch 1/1
Failed:
logstash.noarch 1:2.3.2-1
Complete!
I can't find much information about this. Any suggestions?
groupadd determines GIDs for the creation of regular groups from the /etc/login.defs file.
On my CentOS 6 box, /etc/login.defs contains the following two lines:
#
# Min/max values for automatic gid selection in groupadd
#
GID_MIN 500
GID_MAX 60000
For system accounts, add these two lines to your /etc/login.defs:
# System accounts
SYS_GID_MIN 100
SYS_GID_MAX 499
I updated the SYS_GID_MAX value and it worked for me.
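If you want to confirm that the system GID range really is exhausted before touching /etc/login.defs, here is a quick sketch using standard tools:
# list every allocated GID, sorted numerically
cut -d: -f3 /etc/group | sort -n
# or show only the GIDs inside the default system range
awk -F: '$3 >= 100 && $3 <= 499 {print $3, $1}' /etc/group | sort -n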

Cannot Update Macports Port Tree

I've been trying to install a few ports (wget, autoconf, coreutils, etc.) but it seems impossible! Here's what I have done, step by step:
I'm using OS X 10.9.1 Mavericks and I've downloaded and installed MacPorts using the installation package (.pkg) from the MacPorts website. I had Xcode 5.0.2 already installed, so I logged in to my Apple iOS developer account, then downloaded and installed command_line_tools_os_x_mavericks_for_xcode__late_october_2013.dmg.
When I use sudo port install coreutils I get the following error:
Error: Port coreutils not found
I thought (and Googled, of course) it must be because I haven't updated MacPorts, so I tried a self-update using sudo port -v selfupdate, which was not successful; I got the following error log:
---> Updating MacPorts base sources using rsync
rsync: failed to connect to rsync.macports.org: Operation timed out (60)
rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync42/rsync/clientserver.c(105) [receiver=2.6.9]
Command failed: /usr/bin/rsync -rtzv --delete-after rsync://rsync.macports.org/release/tarballs/base.tar /opt/local/var/macports/sources/rsync.macports.org/release/tarballs
Exit code: 10
Error: Error synchronizing MacPorts sources: command execution failed
To report a bug, follow the instructions in the guide:
http://guide.macports.org/#project.tickets
Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing MacPorts sources: command execution failed
Based on the 'failed to connect' message, I thought it might be caused by the restrictions and sanctions applied to my IP address, which is currently from Iran (I figured that out because I cannot even open the MacPorts website directly without using a proxy server)! I used the instructions at the following URL to reroute the connection and make MacPorts connect through a proxy server:
http://samkhan13.wordpress.com/2012/06/15/make-macports-work-behind-proxy/
The instructions above fetch the port tree as a .tar.gz archive over HTTP. I didn't get the connection error anymore, but I got a 'Could not access the file' error, so I downloaded the file manually, set up an Apache web server locally, and replaced the HTTP URL with my localhost link.
Everything seemed to be fine when using
sudo port -v sync instead of sudo port -v selfupdate
Here's how the log started :
---> Updating the ports tree
Synchronizing local ports tree from http://localhost/ports.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 24.6M 100 24.6M 0 0 98.9M 0 --:--:-- --:--:-- --:--:-- 99.1M
x ports/
x ports/gnome/
x ports/gnome/gnofract4d/
x ports/gnome/gnofract4d/Portfile
x ports/gnome/gnofract4d/files/
x ports/gnome/gnofract4d/files/patch-setup.py.diff
x ports/gnome/gnofract4d/files/patch-win.diff
x ports/gnome/gnofract4d/files/patch-fract4d_fractconfig.py.diff
x ports/gnome/gnofract4d/files/patch-fract4d-c-imageIO.cpp.diff
x ports/gnome/libchamplain/
x ports/gnome/libchamplain/Portfile
x ports/gnome/gconf/
x ports/gnome/gconf/Portfile
x ports/gnome/goocanvas/
x ports/gnome/goocanvas/Portfile
x ports/gnome/gstreamer1-gst-libav/
.
.
.
But in the end, I got some errors :
.
.
.
x ports/net/daemonlogger/Portfile
x ports/net/dibbler/
x ports/net/dibbler/Portfile
x ports/net/dibbler/files/
x ports/net/dibbler/files/0-enable-prefix.patch
x ports/net/dibbler/files/1-correct-man-pages.patch
x ports/PortIndex_darwin_11_i386/
x ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
Command failed: cd /opt/local/var/macports/sources/localhost/ports/.. && /usr/bin/tar -v -z -xf ports.tar.gz
Exit code: 1
Error: Extracting http://localhost/ports.tar.gz failed (command execution failed)
port sync failed: Synchronization of 1 source(s) failed
Now, I still cannot install any ports, and if I revert the default link in /opt/local/etc/macports/sources.conf to its original rsync one, everything returns to the way it was (all errors, all messages, etc.).
If I don't revert and keep using the file I put on my localhost (or use file:// to address the file directly), here's what happens when I try to install a port (for example, using sudo port install coreutils):
Port extract failed: ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
while executing
"macports::fetch_port $path 1"
(procedure "macports::getportdir" line 12)
invoked from within
"macports::getportdir $source"
(procedure "macports::getindex" line 4)
invoked from within
"macports::getindex $source"
(procedure "_mports_load_quickindex" line 11)
invoked from within
"_mports_load_quickindex"
(procedure "mportinit" line 577)
invoked from within
"mportinit ui_options global_options global_variations"
Error: /opt/local/bin/port: Failed to initialize MacPorts, Port extract failed: ports/PortIndex_darwin_11_i386/PortIndex.quick: gzip decompression failed
tar: Error exit delayed from previous errors.
I have Googled and read almost every suggested solution, but NONE has worked out and I'm really stuck with this :(
Any NEW solution is really appreciated.
No replies, and I found the solution myself!
The only way to redirect rsync requests through a proxy server is to tunnel over an L2TP VPN connection (not PPTP); that is the only thing that made MacPorts work behind a proxy server for me.
Hope this can help others who are stuck with this weird connection setup.
Instead of the main MacPorts mirror (which is sponsored by MacOSForge, which is run by Apple, which is thus bound to US law and export restrictions to Iran), you can use an alternate rsync mirror from the list at http://trac.macports.org/wiki/Mirrors.
If none of the rsync mirrors work for you, also read the FAQ entry for this very question: http://trac.macports.org/wiki/FAQ#selfupdatefails.
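For example, switching mirrors means pointing the default source in /opt/local/etc/macports/sources.conf at one of the hosts from that list; mirror.example.org below is a placeholder (the path follows the same pattern as the default source shown in the logs above):
rsync://mirror.example.org/release/tarballs/ports.tar [default]
Then re-run:
sudo port -v selfupdate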
