I have Jenkins running on EC2; I use the standard Amazon AMI based on CentOS.
I would like to set up the SLOCCOUNT plugin the same way it runs on my dev machine (running Ubuntu), but I can't find the package in the Amazon AWS package repository (searching for sloccount* brings no answer).
Does anyone know if SLOCCOUNT is in the AWS repository, and under what name?
Thanks in advance,
didier
It's not in the AWS repository, but I've had luck getting the RPM from the Fedora repos.
wget http://www.mirrorservice.org/sites/download.fedora.redhat.com/pub/fedora/linux/development/rawhide/x86_64/os/Packages/s/sloccount-2.26-12.fc17.x86_64.rpm
worked for me.
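Once downloaded, installing the RPM through yum lets it try to resolve dependencies; a minimal sketch, assuming the Fedora build's dependencies are satisfiable on the Amazon AMI:

```bash
# Install the locally downloaded RPM, resolving dependencies from the enabled repos
sudo yum localinstall sloccount-2.26-12.fc17.x86_64.rpm
```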
Dear community, I need your help understanding something.
I want to set up an automated Ubuntu 20.04 template with GitLab CI/CD in our vCenter.
I configured a gitlab-runner with a self-written Packer Docker image (https://hub.docker.com/r/docker4bscg/packer), which should execute my Packer configs from the GitLab project in the GitLab CI/CD pipeline. The problem is: how do I expose the Packer HTTP server which serves my cloud-init user-data/meta-data files? Or how do I make the Subiquity config accessible via CI/CD through the Packer GitLab runner? The issue is that Packer runs inside the Docker container, which has its own Docker network.
Or is it possible to install an Ubuntu 20.04 server via Packer with Subiquity where the user-data/meta-data are provided via a floppy drive?
I hope you understand my question(s) and can help me solve them.
Greetings.
I know that you can set up a proxy in Ansible to provision behind a corporate network:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html
like this:
environment:
  http_proxy: http://proxy.example.com:8080
Unfortunately, in my case there is no access to the internet from the server at all. Downloading roles locally and putting them under the /roles folder seems to solve the role issue, but the roles still download packages from the internet when using:
package:
  name: package-name
  state: present
I guess there is no way to make a dry/pre-run so that Ansible downloads all the packages, then push those into a repo and run the Ansible provision using the locally downloaded packages?
This isn't really a question about Ansible, as all Ansible is doing is running the relevant package management system on the target host (i.e. yum, dnf, apt or whatever). So it is a question of what solution the specific package management tool provides for this case.
There are a variety of solutions, and in the CentOS/RHEL world, for example, you can:
Create a basic mirror (see the sketch after this list)
Install a complete enterprise management system
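For the first option, a minimal sketch of building a basic yum mirror on a host that does have internet access (the repo id and paths here are illustrative):

```bash
# On a machine with internet access: fetch the packages and build repo metadata
sudo yum install yum-utils createrepo
reposync --repoid=base --download_path=/var/www/html/mirror/
createrepo /var/www/html/mirror/base/
# Serve /var/www/html over HTTP inside the closed network, then point the
# offline hosts at it (e.g. with a .repo file or Ansible's yum_repository module)
```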
There is another class of tool generally called an artefact repository. These started out life as tools to store binaries built from code, but they have added a bunch of features to act as a proxy and to cache packages from a wide variety of sources (OS packages, pip, Node.js, Docker, etc.). Two examples that have limited free offerings:
Nexus
Artifactory
They of course still need to collect those packages from a source, so at some point those are going to have to be downloaded and placed within these systems.
As clockworknet pointed out, this is more a question of RHEL package handling. Setting up a local mirror somewhere inside the closed network can provide a solution in this situation. More info in "How to create a local mirror of the latest update for Red Hat Enterprise Linux 5, 6, 7 without using Satellite server?": https://access.redhat.com/solutions/23016
My solution:
Install Sonatype Nexus 3
create one or more yum proxy repositories
https://help.sonatype.com/repomanager3/formats/yum-repositories
use Ansible to add these proxies via yum_repository
https://docs.ansible.com/ansible/latest/modules/yum_repository_module.html
yum_repository:
  name: proxy-repo
  description: internal proxy repo
  baseurl: https://your-nexus.server/url-to-repo
Note: I did this for APT and it works fine; I would expect the same for yum.
I have never hosted a website before, which may be why this task became so tough for me. I searched various guides for deployment but wasn't able to host my website.
I used Python 3.6.4 and Django 2.0.2 with a MySQL database for my website. It would be a great help if I could get the deployment steps from scratch for my requirements.
Thanks in advance!
Below are the basic steps to host your Django website on any Linux-based server.
1) Create a requirements.txt file which includes all your pip packages.
On your local environment, just run pip freeze. It will show you something like the output below; include those packages in your file.
Django==1.11.15
pkg-resources==0.0.0
pytz==2018.5
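To capture that list straight into the file, you can redirect the output:

```bash
# Write the current environment's packages into requirements.txt
pip freeze > requirements.txt
```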
2) Create a virtual env on your Amazon EC2 instance. You can follow the steps given on the website below.
https://docs.python-guide.org/dev/virtualenvs/
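A minimal sketch using the standard-library venv module (the path is an arbitrary choice):

```bash
# Create and activate a virtual environment for the site
python3 -m venv ~/envs/mysite
source ~/envs/mysite/bin/activate
```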
3) Install your local packages into this virtual env, for example as shown below.
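With the virtual env active, the requirements file from step 1 installs everything in one go:

```bash
# Install the exact package versions captured from the local environment
pip install -r requirements.txt
```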
4) If you have MySQL as the backend, you can install it with the command below.
sudo apt-get install mysql-server
Or you can use RDS (the Amazon Relational Database Service).
5) Check that your Django project is able to connect to MySQL using the command below.
python manage.py check
6) If the above command works without error, you need to install two things:
1) an application server
2) a web server
7) You can use any application server, such as uWSGI or Gunicorn:
https://uwsgi-docs.readthedocs.io/en/latest/
https://gunicorn.org/
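As a hedged sketch with Gunicorn ('mysite' stands in for your actual project name):

```bash
# Install Gunicorn into the virtual env and serve the Django WSGI application
pip install gunicorn
gunicorn mysite.wsgi:application --bind 127.0.0.1:8000
```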
8) The web server will be nginx:
https://nginx.org/en/
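A minimal reverse-proxy sketch, assuming a Debian/Ubuntu-style nginx layout and the Gunicorn bind address from the previous step (server_name and paths are placeholders):

```bash
# Write a minimal site config that forwards requests to Gunicorn
sudo tee /etc/nginx/sites-available/mysite <<'EOF'
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```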
9) For your static files you will need a bucket. You need to create the bucket and host your static files there; one option is sketched below.
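One common approach (an assumption, not the only option) is S3 via the django-storages package; the bucket name is a placeholder, and AWS credentials must be available (e.g. via an instance role):

```bash
# Install the storage backends, then point Django's static storage at S3
pip install django-storages boto3
# In settings.py (these are the standard django-storages settings):
#   INSTALLED_APPS += ['storages']
#   STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
#   AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
python manage.py collectstatic
```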
You can find help online to achieve each of the above steps.
I have a very interesting problem. The following is my current deployment workflow for Amazon EC2 in classic mode:
The deploy host is inside my company's network.
The deploy target is an EC2 machine in AWS.
We have custom ruby gems inside the company's git account (hence I cannot install the gems from outside my company's network).
To overcome the problem mentioned in point #3, I have used reverse tunnelling between the deploy host and the deploy target.
I am using Capistrano for deployment.
Now the problem arises because we decided to move from Amazon Classic to Amazon VPC, with the deploy target having only a private IP address. Here is the workflow I thought of for deploying code to VPC instances:
Create a deploy host in the Amazon VPC and attach a public DNS name to it so that I can access it from my main deploy host (which is inside my company's network).
Deploy the code by running the deployment scripts from the AWS deploy host.
The problem is that I am not able to find a way to install the gems which are hosted inside my company's git account. Can you help me with this problem?
Prior to deployment, you can set up git mirrors of your production repositories by pushing to bare git repositories on your AWS deploy host, as sketched below.
That AWS deploy host also has access to your VPC, so you can run the deployment from there.
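A minimal sketch of such a mirror (the git host and paths are hypothetical):

```bash
# On the AWS deploy host: create a full mirror of the internal gem repository
git clone --mirror git@git.company.com:team/custom-gem.git /srv/git/custom-gem.git
# Refresh the mirror before each deploy so the target sees current code
cd /srv/git/custom-gem.git && git remote update
```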
Hope it helps.
Download the gems first and then copy them to the EC2 instance in the VPC using scp (here ./downloaded-gems stands for the local folder holding the .gem files):
scp -r -i key ./downloaded-gems ubuntu@ip-address:/ruby-app
Then run gem install gem-name from that folder; it will install the gem from the local file matching that name.
Alternatively, run bundle package; this will download all the gems into the vendor/cache folder. Then move those files to the EC2 instance, as below.
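A short sketch of that flow (the key, IP address, and /ruby-app path reuse the placeholders above):

```bash
# On the machine with internet access: vendor every gem into vendor/cache
bundle package
scp -r -i key vendor/cache ubuntu@ip-address:/ruby-app/vendor/
# On the EC2 instance: install strictly from the vendored copies, no network needed
bundle install --local
```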
I need to build an RPM package on my CentOS 6 EC2 instance, so I think it would be best to use the "official" specs from amzn. Usually I did that with yumdownloader --source xxx, but on the EC2 instance it cannot find any sources.
I checked /etc/yum.repos.d, which does not seem to have any repo for sources.
You can use the get_reference_source Python script as described by Shadow Lau, but that requires the package to already be installed, and you need to run it on an Amazon Linux instance in EC2.
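For example (a sketch; -p names an installed package, and the package name here just reuses the one from the URL below):

```bash
# On an Amazon Linux instance: request the source package for an installed RPM
sudo get_reference_source -p stunnel
```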
The script gets the URL to download from alami-source-request.amazonaws.com. Here is how you can use it:
https://alami-source-request.amazonaws.com/cgi-bin/source_request.cgi?instance_id=i&region=eu-west-1&version=2011-08-0&srpm_name=stunnel-4.29-3.6.amzn1.src.rpm
Unfortunately you need to know the exact package name. The version is as it appears in the get_reference_source script, and there seems to be no validation done on instance_id.
The above URL will return another URL with an access key, where you can download the SRPM for a limited time. After that you have to generate another URL with the same source_request.cgi.
Look for "Accessing Source Packages for Reference" in
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/AmazonLinuxAMIBasics.htm