The selfupdate command is incredibly slow in MacPorts. It seems to spend ages on this step:
---> Updating the ports tree
Synchronizing local ports tree from http://distfiles.macports.org/ports.tar.gz
I believe most of the time goes into downloading the ports.tar.gz file itself (on the order of 9-22 kbps). I've downloaded it myself with the axel downloader at 100-300 kbps. How can I plug this local copy into selfupdate so that it can work offline, at least for the ports.tar.gz file? Is this even possible?
Nailed it! Got the solution to the problem. All I had to do was put the local path into /opt/local/etc/macports/sources.conf in place of the old entry, like this:
#rsync://rsync.macports.org/release/tarballs/ports.tar [default]
#https://www.macports.org/files/ports.tar.gz [default]
file:////Users/Ebe/Downloads/Axel/ports.tar.gz
I commented out the other entries just in case. (For reference, the second line in the conf file came from this post.)
I then executed sudo port selfupdate and the process completed without any delay.
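As a rough sketch of the whole offline flow (the download path is just an example; the URL is the one selfupdate was using):
# Grab the ports tree once with any faster downloader (axel, curl, ...)
curl -L -o ~/Downloads/ports.tar.gz http://distfiles.macports.org/ports.tar.gz
# Point sources.conf at the local copy, e.g.
#   file:///Users/yourname/Downloads/ports.tar.gz
# then sync against it:
sudo port selfupdate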
I want to download GPM data from GES DISC, provided by NASA. I used wget with the notepad file of URLs I received, following the steps described at https://disc.gsfc.nasa.gov/data-access. I had no problem with the download process in Command Prompt, but after downloading I found that only odd days had been downloaded, while the URLs in my notepad file cover all days of my desired period. How can I resolve this problem?
I would like to verify that an rpm is available from Nexus 3 after it is uploaded.
When an rpm is uploaded to Nexus 3, the following events happen (looking at the logs):
Scheduling rebuild of yum metadata to start in 60 seconds
Rebuilding yum metadata for repository rpm
...
Finished rebuilding yum metadata for repository rpm
This takes a while. In my CI pipeline I would like to check periodically until the artifact is available to be installed.
The pipeline builds the rpm, uploads it to Nexus 3 and then checks every 10 seconds whether the rpm is available. To check the availability of the rpm I run the following command:
yum clean all && yum --disablerepo="*" --enablerepo="the-repo-I-care-about" list --showduplicates | grep <name_of_artifact> | grep <expected_version>
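As an illustrative sketch (not part of the original pipeline; repo, package and version names are placeholders), the periodic check could be wrapped in a loop like this:
# Poll every 10 seconds until the rpm shows up, give up after ~5 minutes
PKG=name_of_artifact
VER=expected_version
REPO=the-repo-I-care-about
for attempt in $(seq 1 30); do
  yum clean all >/dev/null
  if yum --disablerepo="*" --enablerepo="$REPO" list --showduplicates 2>/dev/null \
      | grep "$PKG" | grep -q "$VER"; then
    echo "rpm is available"; exit 0
  fi
  sleep 10
done
echo "rpm still not available" >&2; exit 1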
The /etc/yum.conf contains:
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
distroverpkg=centos-release
http_caching=none
The /etc/yum.repos.d/repo-i-care-about.repo contains:
[repo-i-care-about]
name=Repo I care about
enabled=1
gpgcheck=0
baseurl=https://somewhere.com
metadata_expire=5
mirrorlist_expire=5
http_caching=none
The problem I'm experiencing is that the list response seems to return stale information.
The metadata rebuild takes about 70 seconds (the initial 60-second delay is configurable; I will tweak it eventually), and I'm checking every 10 seconds. The response from the yum repo sometimes looks cached: when that happens, if I perform the same search on another box with the same repo settings I get the expected artefact version.
The fact that another machine gets the expected result on the first attempt with the same list command, while the machine that checks every 10 seconds never seems to receive it (even several minutes after the artefact is visible from the other box), makes me think the response gets cached.
I would like to avoid waiting 90 seconds or so before making the first list request (so that the artefact is most likely ready the very first time I run the command and I don't cache a stale result), especially because the initial scheduling delay of the metadata rebuild might change (we might lower it from 60 seconds).
The flakiness of this check improved after I added http_caching=none to yum.conf and to the repo definition, but that still didn't make the problem go away reliably.
Are there any other caching-related settings I'm supposed to configure in order to get more reliable results from the list command? At this point I really don't care how long the list command takes, as long as it does not return stale information.
Looks like deleting the /var/cache/yum/* folders is making the check more reliable. Still, it feels like I'm missing some settings to achieve what I need in a neater way.
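For completeness, a sketch of that more aggressive variant, wiping the cache directory (the cachedir from /etc/yum.conf) before the check:
# Force a cold start for yum metadata before listing (run as root)
rm -rf /var/cache/yum/*
yum clean all
yum --disablerepo="*" --enablerepo="the-repo-I-care-about" list --showduplicates \
    | grep name_of_artifact | grep expected_version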
I'm trying to use wkhtmltopdf on GCF for PDF generation.
When my function tries to spawn the child process I get the following error:
Error: ./services/wkhtmltopdf: error while loading shared libraries: libXrender.so.1: cannot open shared object file: No such file or directory
The problem is clearly due to the fact that wkhtmltopdf binary depends on external shared libraries which are not installed in GCF environment.
Is there a way to solve this issue, or should I give up and use another solution (AWS Lambda or GAE)?
Thank you in advance
Indeed, I found a way to solve this issue by copying all the required libraries into the same folder (/bin for me) that contains the wkhtmltopdf binary. To let the binary use the uploaded libraries, I added the following lines to wkhtmltopdf.js:
const path = require('path'); // make sure 'path' is required near the top of the file
wkhtmltopdf.command = 'LD_LIBRARY_PATH=' + path.resolve(__dirname, 'bin') + ' ./bin/wkhtmltopdf';
wkhtmltopdf.shell = '/bin/bash';
module.exports = wkhtmltopdf;
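A hedged sketch of how the required libraries can be collected in the first place, using ldd on a Linux machine that matches the GCF runtime (directory names are assumptions):
# Copy every shared library wkhtmltopdf resolves into ./bin, next to the binary,
# so the LD_LIBRARY_PATH trick above can find them
mkdir -p bin
ldd bin/wkhtmltopdf | awk '/=> \//{print $3}' | xargs -I{} cp -v {} bin/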
Everything worked fine for a while. Then all of a sudden I started receiving many connection errors or timeouts from GCF, but I think that's not related to my implementation but rather to Google.
I've ended up setting up a dedicated server.
I have managed to get it working. There are two things that need to be done, as wkhtmltopdf won't work if:
libXrender.so.1 can't be loaded
you are using stdout to collect the resulting PDF; wkhtmltopdf has to write the result into a file
First, you need to obtain the correct version of libXrender.
I found out which Docker image Cloud Functions uses as the base for Node.js functions, ran it locally, installed libxrender and copied the library into my function's directory.
docker run -it --rm=true -v /tmp/d:/tmp/d gcr.io/google-appengine/nodejs bash
Then, inside the running container:
apt update
apt install libxrender1
cp /usr/lib/x86_64-linux-gnu/libXrender.so.1 /tmp/d
I put this into my function's project directory, under a lib subdirectory. In my function's source file, I then set up LD_LIBRARY_PATH to include the /user_code/lib directory (/user_code is the directory where your function will ultimately end up being put by Google):
process.env['LD_LIBRARY_PATH'] = '/user_code/lib'
This is enough for wkhtmltopdf to be able to execute. It will still fail, though, as it won't be able to write to stdout, and the function will eventually time out and be killed (as Matteo experienced). I think this is because Google runs the containers without a tty (just speculation); I can run my code in their container if I run it with the docker run -it flags. To solve this, I invoke wkhtmltopdf so that it writes the output into a file under /tmp (this is an in-memory tmpfs). I then read the file back and send it as my response body. Note that the tmpfs might be reused between function calls, so you need to use a unique file every time.
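As a rough shell-level illustration of that approach (inside the function you would do the equivalent from Node; paths and the URL are assumptions):
# Run the bundled binary with the copied libs and a unique output file in /tmp,
# then read the PDF back instead of relying on stdout
export LD_LIBRARY_PATH=/user_code/lib
OUT="/tmp/out-$$-$(date +%s%N).pdf"
/user_code/bin/wkhtmltopdf "https://example.com" "$OUT"
# ...read "$OUT" into the HTTP response body, then clean up:
rm -f "$OUT"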
This seems to do the trick and I am able to run wkhtmltopdf as a Google Cloud Function.
I'm looking for some best practices on how to increase my productivity when writing new puppet modules. My workflow looks like this right now:
1. vagrant up
2. Make changes/fixes
3. vagrant provision
4. Find mistakes/errors, GOTO 2
After I get through all the mistakes/errors I do:
1. vagrant destroy
2. vagrant up
3. Make sure everything is working
4. Commit my changes
This is too slow... how can I make this workflow faster?
I am in denial about writing tests for puppet. What are my other options?
cache your apt/yum repository on your host with the vagrant-cachier plugin
use the --profile and --evaltrace options to find where you lose time on full provisioning
use packaged software instead of compiling from source:
e.g. rvm install ruby-2.0.0 vs. a pre-compiled ruby package created with fpm
avoid a "wget the internet and compile" approach
this will probably make your provisioning more reproducible and speedier
don't write modules yourself
try reusing some from the Forge/GitHub/...
note that this can go against my previous advice
if this is an option, upgrade your puppet/ruby version
iterate and prevent full provisioning
vagrant up
vagrant provision
modify manifest/modules
vagrant provision
modify manifest/modules
vagrant provision
vagrant destroy
vagrant up
launch server-spec
minimize typed commands
launch commands as you modify your files
you can perhaps set up guard to launch lint/test/spec/provision as you save
you can also send notifications from guest to host machine with vagrant-notify
test without actually provisioning in vagrant
rspec puppet (ideal when refactoring modules)
test your provisioning instead of manual checking
stop vagrant ssh-ing in to check whether a service is running or a config has a given value
launch server-spec
take a look at Beaker
delegate running the test to your preferred ci server (jenkins, travis-ci,...)
if you are a bit frustrated by puppet... take a look at ansible
easy to set up (no ruby to install/compile)
you can select the portions you want to run with tags
you can share the playbooks via synced folders and run ansible in the vagrant box locally (no librarian-puppet to launch); a quick sketch follows this list
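For that last point, a minimal sketch of running ansible locally inside the box (playbook path and tag are just examples):
# Inside the vagrant box, with the playbooks shared via a synced folder
sudo apt-get install -y ansible
ansible-playbook -i "localhost," -c local /vagrant/playbook.yml --tags web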
Update: after discussion with @garethr, take a look at his latest presentation about guard.
I recommend using language-puppet. It comes with a command-line tool (puppetresources) that can compute catalogs on your computer and let you examine them. It has a few useful features that can't be found in Puppet:
It is really fast (6 times faster on a single catalog, something like 50 times on many catalogs)
It tracks where each resource was defined and what the "class stack" was at that point, which is really handy when you have duplicate resources
It automatically checks that the files you refer to exist
It is stricter than Puppet (breaks on undefined variables for example)
It lets you print the content of any file to standard output, which is useful for developing complex templates
The only caveat is that it only works with "modern" Puppet practices. For example, require is not implemented. It also only works on Linux.
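A usage sketch; the flags here are written from memory and should be treated as assumptions, so check puppetresources --help for the exact options:
# Compute and inspect the catalog for one node without a puppet master
# (-p: directory with your manifests/modules, -o: node name -- assumed flags)
puppetresources -p /path/to/your/puppet/dir -o web01.example.com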
Background:
I recently installed MAMP, and am using it as a production server. The server setup did not come with an FTP server, and from what I've read, you can set up an FTP server via mod_ftp, an Apache module. I am not an expert with Apache software or server admin, although I can learn quickly. I can get to the following point and then I get stuck. Can someone please help me out?
I checked out the mod_ftp module files from the repository, here:
http://svn.apache.org/repos/asf/httpd/mod_ftp/trunk/
and I unzipped the contents into:
/Applications/MAMP/mod_ftp
I opened the README-FTP file (here):
http://svn.apache.org/repos/asf/httpd/mod_ftp/trunk/README-FTP
README-FTP:
To build and install as a DSO outside of the httpd source
build, from the ftp source root directory, simply;
./configure.apxs
make
make install
...
To build static, or as a DSO but within the same build as httpd,
copy the entire ftp source directory tree on top of your existing
httpd source tree, and from the httpd source root directory
./buildconf (to pick up ftp)
./configure --enable-ftp {your usual options}
and proceed as usual.
Some Questions:
"build and install a DSO outside of the httpd source build, from the ftp source root directory" -- is the ftp source root directory the mod_ftp folder that I created from the zipped files I checked out from the repository?
What does it mean "outside of the httpd source build"? -- is this the ServerRoot value I set in the httpd.conf as "/Applications/MAMP/Library" ?
Likewise, what does "within the same httpd build" mean -- what location is this referring to?
How do I know whether I want a static or DSO build?
What is the statement "copy the entire ftp source directory tree on top of your existing httpd source tree" actually asking me to do? ("On top of"? As in, in the parent directory of the httpd source tree, or in the same directory?)
If you've made it this far, I'd like to commend you!
From this point, I chose the first option, and entered the commands seen in README-FTP into my Terminal.
Here's what my terminal looks like:
$ ./configure.apxs
Configuring mod_ftp for APXS in /usr/sbin/apxs
Detecting features
Finished, run 'make' to compile mod_ftp
Run 'make FTPPORT=8021 install' to install mod_ftp
(The default FTPPORT is 21 if not specified)
The manual pages ftp/index.html and mod/mod_ftp.html
will be installed to help get you started.
The conf/extra/ftpd.conf will be installed as an example
for you to work from. In your configuration file,
/private/etc/apache2/httpd.conf
uncomment the line '#Include conf/extra/ftpd.conf'
to activate this example mod_ftp configuration.
$ make
Making all in modules/ftp
$ sudo make install
Password:
Making install in modules/ftp
/usr/share/apr-1/build-1/libtool --silent --mode=install cp mod_ftp.la /usr/libexec/apache2/
Installing configuration files
for i in /private/etc/apache2/httpd.conf /private/etc/apache2/original/httpd.conf; do \
if test -f $i; then \
(awk -f /applications/mamp/library/mod_ftp/build/addloadexample.awk \
-v MODULE=ftp -v DSO=.so -v LIBPATH=libexec/apache2 \
-v EXAMPLECONF=/private/etc/apache2/extra/ftpd.conf \
< $i > $i.new && \
mv $i $i.bak && mv $i.new $i \
) || true; \
fi; \
done
Preserving existing FTP documents
Installing header files
Installing online manual
$
So what do I do from here?
I don't see mod_ftp.so anywhere, and I am particularly looking in this directory:
/Applications/MAMP/Library/modules (where all of Apache's other mod_*.so files are...)
and this directory:
/Applications/MAMP/mod_ftp/modules/ftp (where all of mod_ftp's various .c, .h and other files are)
Ultimately, I think the problem I am running into is that I don't understand how the file structures between my mod_ftp source folder and the httpd source folders need to be integrated in order to get the module running properly. Also, I don't know what I don't know, so there is probably one simple question to ask, but unfortunately I can't figure out how to ask it. Thank you for your help and patience!
Cheers!
P.S., yes, I have scoured the internet for hours.
I ended up scrapping MAMP and using the Mac's built in server. Through the System Preferences > Sharing menu, you can enable file sharing, which has an Options pane that allows you to "Share files and folders using FTP." I was able to obtain a static IP address through Comcast Business, and configured port forwarding on port 21 in my router to accept traffic. Then, I could use my FTP client to connect to my router with something like "123.456.789:21" as my host. Wasn't the best or most secure solution, but it worked, so take it with a grain of salt.
Right, I finally managed to install this on Ubuntu LTS 16.04.
First of all, you should install svn and the apxs tooling by running:
sudo apt-get install subversion apache2-dev
Then cd to a convenient folder and run svn co http://svn.apache.org/repos/asf/httpd/mod_ftp/trunk/. This downloads everything into a folder named trunk. Then cd into that trunk folder of the downloaded repo.
Then, run the stated instructions:
./configure.apxs --> configures the build against your installed apxs so that make will work
make --> compiles the module
make install --> you may want to run it with the suggested FTPPORT flag; it essentially copies things to where they need to be copied and installs the compiled module
The suggested ./buildconf and ./configure should only be used if you are compiling apache2 with ftp support at the same time. Since you should already have apache2 installed, that is not the option you want. Just stick with the first set of instructions, which compile mod_ftp independently of apache2 and patch things in as needed.
At this point, the installation should technically work. However, you are not fully out of the woods yet: if you restart apache2 now, it will fail to start. If you check the logs (e.g. journalctl -xe), you will see that this is due to syntax errors in various places of the config files where a leading forward slash was omitted, so rather than being relative to the filesystem root, the directories being specified end up being relative to /etc/apache2. Fix those, using the line numbers as a guide. The omissions can be found in the apache2.conf line that specifies the location of the mod_ftp module, and in the ftpd.conf line that specifies the location of the error log.
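A quick way to surface those syntax errors is the standard Apache/systemd tooling on Ubuntu:
# Show which config line apache2 is choking on, then inspect the service logs
sudo apache2ctl configtest
sudo systemctl status apache2.service
sudo journalctl -xe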
You now need to mess around with apache2.conf and ftpd.conf (found in the extra subfolder of the /etc/apache2 folder). Make sure that the following lines are present and uncommented:
Include /etc/apache2/extra/ftpd.conf
LoadModule ftp_module /usr/lib/apache2/modules/mod_ftp.so
The first basically tells the main apache2.conf file to include the configuration file for mod_ftp, which helps with partitioning your HTTP and FTP configuration settings. The second just makes sure the ftp module is loaded so that it can interpret the directives in ftpd.conf. Thus, you won't need to add the line "FTP on" or specify ports, as those are handled in ftpd.conf perfectly well.
You should now be good to go. Just note that, for some weird reason, if you set the document root in ftpd.conf to be the same as the one in apache2.conf, apache2 will still start normally and the FTP server will work, but the HTTP server will not. No idea why, but if you want that setup, a simple workaround is to create a symlink to the HTTP document root and set the symlink as the FTP document root.
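A sketch of that workaround with example paths (adjust both to your actual document roots):
# Symlink the HTTP document root somewhere else and use the symlink for FTP
sudo ln -s /var/www/html /srv/ftp-docroot
# then set the document root line in /etc/apache2/extra/ftpd.conf to /srv/ftp-docroot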