Maybe it's because I'm new to shared environments where I have no root access or the dpkg/apt family of tools, but I want to install software from source (gcc/gdb, for instance): use wget to grab the tarball, unpack it, run configure --prefix=$HOME, then make; make install. I'm having some issues, though: first the whitelist (obviously), and second the configure step is giving me trouble.
Can someone walk me through this process? PythonAnywhere comes with make, so it's not as if they don't want you doing this.
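To be concrete, the sequence I have in mind is roughly this (the URL and version are just an example):
# fetch, unpack, configure into my home directory, build, install
wget https://ftp.gnu.org/gnu/gdb/gdb-12.1.tar.gz
tar xzf gdb-12.1.tar.gz
cd gdb-12.1
./configure --prefix=$HOME
make
make install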
EDIT
gcc/gdb may not be the best example -- together they come to nearly half of the 500 MB allotment for free accounts.
Any pure Python module will install with ease. Unfortunately, you can't install modules that require a compiler. The PythonAnywhere staff are generally very accommodating about installing requested packages into the "batteries included" set for everyone to enjoy.
Feel free to make a request on the PA forum, or email the staff: support@pythonanywhere.com
For clarity: to install a pure Python module, you just use
pip-3.2 install --user <package_name>
Change 3.2 to match the Python version you want, and of course replace <package_name> with your desired package.
Related
How would I install things on Pepper? I usually use apt on my Ubuntu machine, but I don't know what package manager Pepper has (if any), and I only know the packages' names under apt (I'm not sure whether the names are the same in other package managers). And if possible, would I be able to install apt on Pepper? Thanks.
Note: from the research I've done, Pepper is using NAOqi, which is based on Gentoo, which uses Portage.
You don't have root access on Pepper, which limits what you can install (and apt isn't on the robot anyway).
Some possibilities:
Include your content in Choregraphe projects - when you install a package, the whole directory structure is installed (more exactly, what's listed in the .pml); so you can put arbitrary files on your robot, and you can usually include whatever dependencies your code needs.
Install python packages with pip.
In NAOqi 2.5, a slightly older version of pip is installed that will not always work out of the box; I recommend upgrading it:
pip install --user --upgrade pip
... you can then install other packages with the upgraded pip, always with --user:
/home/nao/.local/bin/pip install --user whatever-package-you-need
Note, however, that if you do this and your code running on Pepper uses those packages, the code won't work on other robots until you run pip on them too. That's why I usually only do this for tests; for production code I prefer bundling all dependencies in my app's package.
As a workaround, if you need to install software (or just newer versions of software), Gentoo Prefix is an option.
Gentoo Prefix builds a Gentoo OS in any location (no root needed; it can be any folder). It includes its own Portage (package manager) to install new software.
I maintain a few projects for working with Pepper and using "any" software I want. Note that they are built for both 64-bit (amd64) and 32-bit (x86), even though only the 32-bit builds matter for Pepper.
gentoo_prefix_ci and gentoo_prefix_ci_32b, which build the bootstrap of the Gentoo Prefix system nightly. This takes a while to compile (3-6 h depending on your machine) and breaks from time to time (as upstream packages are updated and bugs are found; Gentoo is a rolling-release distribution). Every night, updated binary images ready to use are published in the Releases section.
For ROS users that want to run it on the robot, I also maintain, based on the previous work, ros_overlay_on_gentoo_prefix and ros_overlay_on_gentoo_prefix_32b. They provide nightly builds with binary releases of ROS Kinetic and ROS Melodic over Gentoo Prefix using ros-overlay. You can find ready-to-use 'ros_base' and 'desktop' releases.
For the RoboCup@Home Social Standard Platform League, where the Pepper robot is used, I also maintain a specific build that contains a lot of additional software. This project is called pepper_os; it builds 270+ ROS packages, a lot of Python packages (250+, including Theano, dlib, Tensorflow, numpy...) and all the dependencies these need to build (750+ packages). Note that the base image (it's built with Docker) is the actual Pepper 2.5.5.5 image, so it can be used for debugging as if it were the real robot (although without sensors and such).
Maybe this approach, or these projects, will be useful.
The package manager on Pepper is disabled, but you can copy the files to the robot and write your own service that imports any package you might need.
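For example, copying a library folder onto the robot over ssh looks something like this (the IP address is a placeholder; nao is the default user on Pepper):
# copy a local folder onto the robot; your service can then add it to its import path
scp -r mylib/ nao@192.168.1.23:/home/nao/mylib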
As a supplement on importing:
http://www.about-robots.com/how-to-import-python-files-in-your-pepper-apps.html
To get rid of the error:
SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
If you use Python and the requests package, just add verify=False to your parameters (note that this disables certificate verification):
r = requests.get(URL, params=params, headers=headers, verify=False)
This works on my Pepper.
To get rid of:
InsecurePlatformWarning: A true SSLContext object is not available.
install:
/home/nao/.local/bin/pip install --user requests[security]
To get rid of:
CryptographyDeprecationWarning: Support for your Python version is deprecated.
install:
/home/nao/.local/bin/pip install --user cryptography==2.2.2
If it's based on Gentoo, maybe we could try to install Portage with pip.
pip install portage
Just a thought.
I come from a Python and JavaScript background.
When developing a JavaScript project, dependencies are installed in a node_modules directory in the project root.
When developing a Python project, virtualenvwrapper is typically used. In this case dependencies are installed in a virtual environment, which is located in ~/.virtualenvs/<project_name> by default.
Now I need to use a ruby tool for a project. The tool that appears to be the most promising for a similar setup as described above, is bundler.
However, the default installation location for bundler is system-wide. I consider this to be harmful.
On one of my systems, it prompts for a password, at which point I can still abort.
On my other system, however, I can write into the global Ruby installation (a Homebrew-installed Ruby), so Bundler will just install dependencies globally.
I know I can specify the installation location by adding --path, but this is easy to forget.
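That is, I'd have to remember to type something like this every time (vendor/bundle is just the conventional location; the path itself is up to you):
bundle install --path vendor/bundle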
One way to enforce an installation path is by committing .bundle/config. It would just have to contain this:
---
BUNDLE_PATH: "."
However, some googling around shows that committing this file is not advised.
What is the recommended way to prevent accidental global installations using bundler?
Who's to say it will be accidental? It really depends on what context you're talking about here. I have my Ruby set up so that bundle install works without requiring sudo; it's all done through rbenv automatically. The same is true with rvm if done as a user-level install.
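For reference, a user-level setup along these lines is what makes sudo unnecessary (the Ruby version here is only an example, and rbenv install needs the ruby-build plugin):
# everything lands under ~/.rbenv, so no root is involved
rbenv install 3.1.2
rbenv global 3.1.2
gem install bundler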
When it comes to deploying apps and you want to make sure it's deployed correctly, that's where tools like Capistrano come into play: Create a deployment script that will apply the correct procedure every time.
Checking in a .bundle/config is really rude from a dev perspective, just like checking in any other user-specific preferences. It causes no end of conflicts with other team members.
I'm trying to set up a Puppet master. I installed the Puppet Labs repository on my CentOS 7 box using:
$ sudo rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
Now, when I try to give sudo yum install puppet-server, it installs puppet-server 3.8.6-1 and also puppet 3.8.6-1.
But the documentation asks to do sudo yum install puppetserver (Notice that the hyphen is missing before server). When I run this, it installs puppetserver 1.1.3-1 and puppet 3.8.6-1.
My question is: what is the difference between puppet-server and puppetserver? Some documentation asks you to use puppet-server, e.g. this. Which one should I be using?
Thanks.
The package name similarity is unfortunate, as these are completely different packages that provide similar functionality.
The puppet-server package is for a machine running the original Ruby / Rack puppetmaster service. There's not a lot to it, as most of the necessary pieces are built into the main puppet package. It includes an internal WEBrick server and can therefore run standalone, but it is more often run in a Rack stack, such as Apache / Passenger, for better capacity and scalability.
The puppetserver package is for a machine to run the new, Java-based 'puppetserver' service endpoint for serving catalogs. It still relies on the Ruby catalog builder underneath; only the client service piece is shifted to Java.
You can use either one, but not both. puppet-server has the advantage of not requiring a Java stack underneath. puppetserver performs better, but only with respect to the actual client service bits. Catalog building is often the real bottleneck, and puppetserver relies on the same infrastructure as puppet-server for that.
puppetserver is the right one to install. See here,
https://docs.puppet.com/puppetserver/2.3/install_from_packages.html
Also, if you want to know what they should be doing, this will make it clear which package contains the right stuff:
https://docs.puppet.com/puppetserver/2.3/services_master_puppetserver.html
puppet-server seems to just set up example environments; most likely a dummy package.
]$ rpm -qlp http://yum.puppetlabs.com/el/7/products/x86_64/puppet-server-3.8.6-1.el7.noarch.rpm
/etc/puppet/environments
/etc/puppet/environments/example_env
/etc/puppet/environments/example_env/README.environment
/etc/puppet/environments/example_env/manifests
/etc/puppet/environments/example_env/modules
/etc/puppet/fileserver.conf
/etc/puppet/manifests
/usr/lib/systemd/system/puppetmaster.service
/usr/share/man/man8/puppet-ca.8.gz
/usr/share/man/man8/puppet-master.8.gz
compared to puppetserver:
]$ rpm -qlp http://yum.puppetlabs.com/el/7/products/x86_64/puppetserver-1.1.3-1.el7.noarch.rpm
/etc/logrotate.d/puppetserver
/etc/puppetserver
/etc/puppetserver/bootstrap.cfg
/etc/puppetserver/conf.d
/etc/puppetserver/conf.d/ca.conf
/etc/puppetserver/conf.d/global.conf
/etc/puppetserver/conf.d/os-settings.conf
/etc/puppetserver/conf.d/puppetserver.conf
/etc/puppetserver/conf.d/web-routes.conf
/etc/puppetserver/conf.d/webserver.conf
/etc/puppetserver/logback.xml
/etc/puppetserver/request-logging.xml
/etc/sysconfig/puppetserver
/usr/bin/puppetserver
/usr/lib/systemd/system/puppetserver.service
/usr/share/puppetserver
/usr/share/puppetserver/cli
/usr/share/puppetserver/cli/apps
/usr/share/puppetserver/cli/apps/foreground
/usr/share/puppetserver/cli/apps/gem
/usr/share/puppetserver/cli/apps/irb
/usr/share/puppetserver/cli/apps/ruby
/usr/share/puppetserver/ezbake-functions.sh
/usr/share/puppetserver/ezbake.manifest
/usr/share/puppetserver/puppet-server-release.jar
/usr/share/puppetserver/scripts
/usr/share/puppetserver/scripts/install.sh
/var/log/puppetserver
/var/run/puppetserver
How can I put my Go binary into a Debian package? Since Go is statically linked, I just have a single executable--I don't need a lot of complicated project metadata information. Is there a simple way to package the executable and resource files without going through the trauma of debuild?
I've looked all over for existing questions; however, all of my research turns up questions/answers about a .deb file containing the golang development environment (i.e., what you would get if you do sudo apt-get install golang-go).
Well. I think the only "trauma" of debuild is that it runs lintian after building the package, and it's lintian who tries to spot problems with your package.
So there are two ways to combat the situation:
Do not use debuild: this tool merely calls dpkg-buildpackage, which does the real heavy lifting. The usual call to build a binary package is dpkg-buildpackage -us -uc -b. You still might call debuild for other purposes, like debuild clean, for instance.
Add the so-called "lintian override" which can be used to make lintian turn a blind eye to selected problems with your package which, you insist, are not problems.
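For a statically linked Go binary, such an override might look roughly like this; mypackage is a placeholder and the tag should be whatever lintian actually complains about. The file ships inside the package as debian/mypackage.lintian-overrides:
# silence the warning lintian emits for statically linked executables
mypackage: statically-linked-binary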
Both approaches imply that you do not build your application with the packaging tools but rather treat it as a blob that is merely wrapped into a package. This requires departing slightly from the normal way debian/rules works (so that it does not attempt to build anything).
Another solution which might be possible (and is really far more Debian-ish) is to try gcc-go (plus gold for linking): since it's a GCC front end, this tool produces a dynamically linked application (which links against libgo or something like that). I personally have no experience with it yet, and would only consider it if you intend to push your package into Debian proper.
Regarding the general question of packaging Go programs for Debian, you might find the following resources useful:
This thread on go-nuts, started by one of the Go-for-Debian packagers.
In particular, the first post in that thread links to this discussion on debian-devel.
The second thread on debian-devel regarding that same problem (it's a logical continuation of the former thread).
Update on 2015-10-15.
(Since this post still appears to be found and studied by people, I've decided to update it to better reflect the current state of affairs.)
Since then, the situation with packaging Go apps and packages has improved dramatically, and it's possible to build a Debian package using the "classic" Go toolchain (the so-called gc suite originating from Google) rather than gcc-go.
Good infrastructure for packages now exists as well.
The key tool to use when debianizing a Go program now is dh-golang described here.
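For what it's worth, a minimal debian/rules under dh-golang usually boils down to something like the following sketch (the import path is a placeholder, and the dh line must be indented with a tab):
#!/usr/bin/make -f
export DH_GOPKG := github.com/you/yourproject
%:
	dh $@ --buildsystem=golang --with=golang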
I've just been looking into this myself, and I'm basically there.
Synopsis
By 'borrowing' the 'package' branch of one of Canonical's existing Go projects, you can build your package with dpkg-buildpackage.
Install dependencies and grab a 'package' branch from another repo.
# I think this list of packages is enough. May need dpkg-dev as well.
sudo apt-get install bzr debhelper build-essential golang-go
bzr branch lp:~niemeyer/cobzr/package mypackage-build
cd mypackage-build
Edit the metadata.
edit the debian/control file (name, version, source). You may need to change the golang-stable dependency to golang-go.
The debian/control file is the manifest. Note the 'build dependencies' (Build-Depends: debhelper (>= 7.0.50~), golang-stable) and the 3 architectures. Using Ubuntu (without the gophers ppa), I had to change golang-stable to golang-go.
edit the debian/rules file (put your package name in place of cobzr).
The debian/rules file is basically a 'make' file, and it shows how the package is built. In this case they are relying heavily on debhelper. Here they set up GOPATH, and invoke 'go install'.
Here's the magic 'go install' line:
cd $(GOPATH)/src && find * -name '*.go' -exec dirname {} \; | xargs -n1 go install
Also update the copyright file, readme, licence, etc.
Put your source inside the src folder. e.g.
git clone https://github.com/yourgithubusername/yourpackagename src/github.com/yourgithubusername/yourpackagename
or, e.g.:
cp .../yourpackage/ src/
build the package
# -us -uc skips package signing.
dpkg-buildpackage -us -uc
This should produce a binary .deb file for your architecture, plus the 'source deb' (.tgz) and the source deb description file (.dsc).
More details
So, I realised that Canonical (the Ubuntu people) are using Go, and building .deb packages for some of their Go projects. Ubuntu is based on Debian, so for the most part the same approach should apply to both distributions (dependency names may vary slightly).
You'll find a few Go-based packages in Ubuntu's Launchpad repositories. So far I've found cobzr (git-style branching for bzr) and juju-core (a devops project, being ported from Python).
Both of these projects have both a 'trunk' and a 'package' branch, and you can see the debian/ folder inside the package branch. The 2 most important files here are debian/control and debian/rules - I have linked to 'browse source'.
Finally
Something I haven't covered is cross-compiling your package (to the other 2 architectures of the 3, 386/arm/amd64). Cross-compiling isn't too tricky in go (you need to build the toolchain for each target platform, and then set some ENV vars during 'go build'), and I've been working on a cross-compiler utility myself. Eventually I'll hopefully add .deb support into my utility, but first I need to crystallize this task.
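For reference, once Go toolchains for the targets are available, the cross-compile itself is just a couple of environment variables at build time (the binary name is a placeholder):
# build 32-bit ARM and 386 Linux binaries from any host
GOOS=linux GOARCH=arm go build -o mytool-arm
GOOS=linux GOARCH=386 go build -o mytool-386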
Good luck. If you make any progress, please update my answer or add a comment. Thanks.
Building deb or rpm packages from Go applications is also very easy with fpm.
Grab it from rubygems:
gem install fpm
After building your binary, e.g. foobar, you can package it like this:
fpm -s dir -t deb -n foobar -v 0.0.1 foobar=/usr/bin/
fpm supports all sorts of advanced packaging options.
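For example, adding package metadata and a runtime dependency might look roughly like this (all values are placeholders; see fpm's documentation for the full option list):
fpm -s dir -t deb -n foobar -v 0.0.1 \
    --description "Example service" \
    --maintainer "you@example.com" \
    --url "https://example.com/foobar" \
    -d ca-certificates \
    foobar=/usr/bin/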
There is an official Debian policy document describing the packaging procedure for Go: https://go-team.pages.debian.net/packaging.html
For libraries: use dh-make-golang to create a package skeleton. Name your package with a name derived from the import path, with a -dev suffix, e.g. golang-github-lib-pq-dev. Specify the dependencies on the Depends: line. (These are source dependencies for building, not binary dependencies for running, since Go statically links all source.)
Installing the library package will install its source code to /usr/share/golang/src (possibly, the compiled libraries could go into .../pkg). Building dependent Go packages will use the artifacts from those system-wide locations.
For executables: use dh-golang to create the package. Specify dependencies on the Build-Depends: line (see above regarding packaging the dependencies).
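As a quick sketch, generating a library skeleton is a single command (lib/pq here is only an example import path; the exact layout it produces depends on the dh-make-golang version):
# creates a golang-github-lib-pq packaging skeleton from the import path
dh-make-golang github.com/lib/pq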
I recently discovered https://packager.io/ - I'm quite happy with what they're doing. Maybe open up one of their packages to see how they do it?
I'm having loads of problems trying to install CPAN modules. Using cpan.exe, I try to install a module with, for example, "install Win32::IE::Mechanize", but I end up hitting a wall. In the beginning it finds dmake.EXE and is okay, but when the install finishes it says dmake.exe is NOT OK, with "dmake.EXE: Error code 255, while making 'test_dynamic'".
I'm very confused as to what is happening and why it's not working. Help is much appreciated.
This is the current error I get:
dmake.EXE: Error code 255, while making 'test_dynamic'
C:\strawberry\c\bin\dmake.EXE test -- NOT OK
Running make install
make test had returned bad status, won't install without force
ABELTJE/Win32-IE-Mechanize-0.009.tar.gz : make_text NO
It's not your fault. That module doesn't work for anyone. When you run into a failure with a module, investigate whether other people are having the same problem. You can look at its CPAN Search page to see that there are no passing tester reports for that distribution. That distribution should not install without force. We cover some of this in Effective Perl Programming's section on researching modules.
Can you install other modules without a problem?
Yeah, well, here's the thing: I have both ActivePerl and Strawberry Perl installed. Is that a problem?
It shouldn't be an issue. However, you need to make sure that Strawberry Perl's distribution contains the binaries and libraries you need to build and install modules that aren't pure Perl. You also need to make sure you're using Strawberry Perl's tools and not ActivePerl's. I recommend putting Strawberry Perl in your PATH, but not ActivePerl.
The easiest thing to do is to skip CPAN and use ActivePerl's own PPM package manager. ActiveState has prebuilt probably 90% of CPAN modules and makes them available via PPM. Try that.
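A minimal attempt would look like this (assuming the module is actually available in ActiveState's PPM repository):
ppm install Win32-IE-Mechanize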