I need to build an RPM package on my CentOS 6 EC2 instance, so I think it's best to use the "official" specs from Amazon. I usually did that with yumdownloader --source xxx, but on the EC2 instance it cannot find any source packages.
I checked /etc/yum.repos.d, which does not seem to contain any repo for source packages.
You can use the get_reference_source Python script as described by Shadow Lau, but that requires the package to be installed, and you need to run it on an Amazon Linux EC2 instance.
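On such an instance, invoking the script per package looks roughly like this (a sketch; the package name is just an example):

# must run on an Amazon Linux EC2 instance, with the package installed
sudo get_reference_source -p stunnel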
The script gets the URL to download from alami-source-request.amazonaws.com. Here is how you can use it:
https://alami-source-request.amazonaws.com/cgi-bin/source_request.cgi?instance_id=i&region=eu-west-1&version=2011-08-0&srpm_name=stunnel-4.29-3.6.amzn1.src.rpm
Unfortunately you need to know the exact package name. The version is the same as in the get_reference_source script, and it seems no validation is done on instance_id.
The above URL will return another URL with an access key, where you can download the SRPM for a limited time. After that you have to generate another URL with the above source_request.cgi.
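A rough sketch of the two-step download with curl, assuming the response body of source_request.cgi is just the signed URL (instance_id is left as the placeholder from the URL above):

# step 1: ask source_request.cgi for a time-limited, signed download URL
SIGNED_URL=$(curl -s "https://alami-source-request.amazonaws.com/cgi-bin/source_request.cgi?instance_id=i&region=eu-west-1&version=2011-08-0&srpm_name=stunnel-4.29-3.6.amzn1.src.rpm")
# step 2: fetch the SRPM before the signed URL expires
curl -o stunnel-4.29-3.6.amzn1.src.rpm "$SIGNED_URL"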
Look for Accessing Source Packages for Reference in
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/AmazonLinuxAMIBasics.htm
Desired behavior
I would like to point CYPRESS_DOWNLOAD_MIRROR at my Artifactory configuration for Cypress and simply be able to run npm install and have it download both the library AND the Cypress binary.
Current behavior
When Cypress is set up in Artifactory and downloaded with CYPRESS_DOWNLOAD_MIRROR pointed at it, the download script finds the binary file as X.Y.Z rather than cypress.zip and fails. Apparently I can't rename binaries in Artifactory; it exposes the file under the name X.Y.Z instead of cypress.zip.
My Artifactory admin told me to run this command beforehand, but I can't, since the download is a post-install action of Cypress:
curl -v "https://artifactory.mycompany.fr/artifactory/remote-download.cypress-generic/desktop/6.8.0?platform=win32&arch=x64" > cypress.zip
Workaround
For now I'm using CYPRESS_INSTALL_BINARY to point to a manually uploaded binary in Artifactory, but it's a pain: I have to keep a separate Linux binary (for CI) and Windows binary (for dev), and if my package is configured with "cypress": "^6.2.1", the npm library will move to 6.2.1 while my binary stays stuck at 6.2.0, for example...
Debug logs
Installing Cypress (version: 6.8.0)
× Downloading Cypress
→ Cypress Version: 6.8.0
Unzipping Cypress
Finishing Installation
The Cypress App could not be downloaded.
Does your workplace require a proxy to be used to access the Internet? If so, you must configure the HTTP_PROXY environment variable before downloading
Cypress. Read more: https://on.cypress.io/proxy-configuration
Otherwise, please check network connectivity and try again:
URL: https://artifactory.mycompany.fr/artifactory/remote-download.cypress.io/desktop/6.8.0?platform=win32&arch=x64
Error: self signed certificate in certificate chain
Download method
npm
Operating System
Linux
Windows
Other
I'm behind a proxy
I don't really know whether this is an Artifactory or a Cypress matter, but I need help ^^
In addition to the accepted answer, it is possible to replace the pre-defined 'Query Params' by enabling 'Propagate Query Params'. If set, the query params passed with the request to Artifactory will be passed on to the remote repo.
Please note that, according to the JFrog docs, this setting is only available for Generic type repositories.
I was able to make it work on Windows using the following:
I created a generic remote repository, making sure it is pointing to https://download.cypress.io, and under the advanced tab, added the query params: platform=win32&arch=x64 (notice there is a dedicated field for it).
The above is required in order to cache the correct binary based on the OS and arch (you might require a different remote repository with different query params).
The Cypress docs mention that these query params control which binary is downloaded (so we need to make sure it matches the client OS and arch).
In the .npmrc I simply provided the following:
CYPRESS_DOWNLOAD_MIRROR=https://user:myverystrongpassword@myartifactory/artifactory/generic-cypress-windows
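With the mirror configured, a regular install should pull the binary through it. A minimal usage sketch, assuming the (example) repository URL above with credentials omitted:

# set the mirror for a single install instead of via .npmrc
CYPRESS_DOWNLOAD_MIRROR=https://myartifactory/artifactory/generic-cypress-windows npm install cypress --save-dev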
I've used this command (on macOS) to directly pass the path to a downloaded Cypress zip file:
CYPRESS_INSTALL_BINARY=~/Downloads/cypress.zip yarn add cypress -D
I know that you can set up a proxy in Ansible to provision behind a corporate network:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html
like this:
environment:
  http_proxy: http://proxy.example.com:8080
Unfortunately, in my case there is no access to the internet from the server at all. Downloading the roles locally and putting them under the /roles folder seems to solve the role issue, but the roles still download packages from the internet when using:
package:
  name: package-name
  state: present
I guess there is no way to do a dry/pre run so that Ansible downloads all the packages, push those into a repo, and then run the Ansible provision using the locally downloaded packages?
This isn't really a question about Ansible, as all Ansible is doing is running the relevant package management system on the target host (i.e. yum, dnf, apt or whatever). So it is a question of what solution the specific package management tool provides for this case.
There are a variety of solutions; for example, in the CentOS/RHEL world you can:
Create a basic mirror (a minimal sketch follows this list)
Install a complete enterprise management system
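For the first option, a rough sketch of building a basic mirror with reposync and createrepo (the repo id and paths are assumptions):

# pull down all packages of one enabled repo into a local directory
reposync -r base -p /var/www/html/mirror
# generate the repodata so yum clients can consume the directory
createrepo /var/www/html/mirror/base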
There is another class of tool generally called an artefact repository. These started out life as tools to store binaries built from code, but have added a bunch of features to act as a proxy and cache packages from a wide variety of sources (OS Packages, PIP, NodeJS, Docker, etc). Two examples that have limited free offerings:
Nexus
Artifactory
They of course still need to collect those packages from a source, so at some point those are going to have to be downloaded and placed within these systems.
As clockworknet pointed out, this is more related to RHEL package handling. Setting up a local mirror somewhere inside the closed network can provide a solution in this situation. More info in "How to create a local mirror of the latest update for Red Hat Enterprise Linux 5, 6, 7 without using Satellite server?": https://access.redhat.com/solutions/23016
My solution:
Install Sonatype Nexus 3
Create one or more yum proxy repositories
https://help.sonatype.com/repomanager3/formats/yum-repositories
Use Ansible to add these proxies via yum_repository
https://docs.ansible.com/ansible/latest/modules/yum_repository_module.html
yum_repository:
  name: proxy-repo
  description: internal proxy repo
  baseurl: https://your-nexus.server/url-to-repo
Note: I did this for APT and it works fine; I would expect the same for yum.
I am creating a JMeter Dockerfile. I have my JMX file and CSV files checked into Git. Could you please guide me on the command to create the image.
There are at least 2 ways of doing this:
Install a git client (the steps differ depending on the Linux distribution you're using in Docker) and perform a git clone of the repository
Use the Docker COPY instruction to copy the previously cloned .jmx and .csv files from the host machine (a minimal sketch follows this list)
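A rough Dockerfile sketch of the second option; the base image, file names, and paths are assumptions and need to match your repository:

# Dockerfile - base image with JMeter preinstalled (an assumption; use whatever JMeter image you build on)
FROM justb4/jmeter:latest
# copy the previously cloned test plan and CSV data into the image
COPY test-plan.jmx /tests/
COPY data.csv /tests/

Build it from the directory containing the cloned files, for example with docker build -t my-jmeter-tests . and run the resulting image in your CI.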
Going forward I would recommend updating the question with your Dockerfile so we can get an idea of your approach and the underlying image(s); this way we won't have to take "blind shots" and the chance that you will get an answer will be much higher.
In the meantime check out Make Use of Docker with JMeter - Learn How article, you can use it (at least partially) as the reference for building your own setup.
How can I download files from Artifactory? Is it possible to download using a batch script? I used CURL commands to upload, so in the same way please provide suggestions for downloading. Appreciate your help.
You can use the JFrog CLI - a compact and smart client that provides a simple interface that automates access to JFrog products. The CLI works on both Windows and Linux.
For downloading files, take a look at the command for downloading files from Artifactory. This command allows you to download specific files, multiple files (using wildcards) or complete folders.
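A rough sketch with the JFrog CLI; the server URL, credentials, and repository path are placeholders:

# configure the Artifactory server once, stored under the ID "my-server"
jfrog rt config my-server --url=https://artifactory.mycompany.fr/artifactory --user=myuser --apikey=myapikey --interactive=false
# download everything matching the pattern into the local "downloads" folder
jfrog rt download "my-repo/path/*.zip" downloads/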
Use GNU WGET from here - http://gnuwin32.sourceforge.net/packages/wget.htm
It is a very small utility and supports download progress and a lot of other options like overwriting, not downloading if the file already exists, etc.
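A minimal example of downloading a single artifact with wget; the URL and credentials are placeholders:

# -O names the local file; --user/--password pass basic-auth credentials to Artifactory
wget --user=myuser --password=mypassword -O artifact.zip "https://artifactory.mycompany.fr/artifactory/my-repo/path/artifact.zip"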
Hi, I used the same CURL command with Ansible, but I had missed configuring the remote server for Ansible, so the CURL was not working. After configuring the remote server it was able to download. Thanks a lot for the response.
I have a requirement where I want to deploy a jar file to a particular location on all Hadoop cluster nodes using the Ambari server. For that purpose I think I can use the custom service feature.
So I created a sample service and could deploy it as client or slave on all nodes.
I added a new folder Testservice inside /var/lib/ambari-server/resources/stacks/HDP/2.2/services/ and it has the following files/directories:
[machine]# cd /var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice^C
[machine]#
[machine]# pwd
/var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice
[machine]# ls
configuration metainfo.xml package
[machine]# ls package/*
package/archive.zip
package/files:
filesmaster.py test1.jar
package/scripts:
test_client.py
[machine]#
With this, my service is added and installed on all nodes. On each node a corresponding directory "/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/Testservice" is created with the same file structure as above. As of now the test_client.py script has no code at all, just dummy implementations of the install and configure functions.
So here I want to add code that copies package/files/test1.jar on each host to a defined destination location, say the /lib folder.
I need help on this point. How can I make use of the test_client.py script? How can I write generic code to copy my jar file?
test_client.py has an install method as shown below:
class TestClient(Script):
  def install(self, env):
I need more details on how the env variable can be used to get all the required base paths for the Ambari service directory and the Hadoop install base paths.
You are correct in thinking that you can use a Custom Ambari Service to ensure a file is present on various nodes in your cluster. Your custom service should have a CLIENT component which handles laying down the files you need on various hosts in the cluster. It should be a client component because it has no running processes.
However, using the files folder is not the correct approach to distribute the file you have (test1.jar). All Ambari services rely on Linux packages to install the necessary files on the system. So what you should be doing is creating a software package that takes care of laying down that lib file in the correct location on disk. This could be an rpm and/or deb file depending on which OSs you plan to support. Once you have the software package you can accomplish your goal by modifying the two files you already outlined above.
metainfo.xml - You will list the necessary software packages required for your service to function correctly. For example if you were planning on supporting RHEL6 and RHEL7 you would create an rpm package named my_package_name and include it with this code:
<osSpecifics>
<osSpecific>
<osFamily>redhat6,redhat7</osFamily>
<packages>
<package>
<name>my_package_name</name>
</package>
</packages>
</osSpecific>
</osSpecifics>
test_client.py - You will need to replace the starter code you have in your question with:
class TestClient(Script):
  def install(self, env):
    self.install_packages(env)
The self.install_packages(env) call will ensure that the packages you have listed in metainfo.xml file get installed when your custom service CLIENT component is installed.
Note: Your software package (rpm, deb, etc.) will have to be hosted in an online repository in order for Ambari to access and install it. You could create a local repository on the node running Ambari Server using httpd and createrepo. This process can be gleaned from the HDP documentation.
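A rough sketch of standing up such a local repo with httpd and createrepo; the package name, paths, and init system are assumptions:

# on the Ambari Server node: install the web server and repo tooling
yum install -y httpd createrepo
# drop the rpm into a directory served by httpd and generate the repo metadata
mkdir -p /var/www/html/my-local-repo
cp my_package_name-1.0-1.x86_64.rpm /var/www/html/my-local-repo/
createrepo /var/www/html/my-local-repo
service httpd start
# cluster nodes then need a .repo file whose baseurl points at http://<ambari-server>/my-local-repo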
Alternative approach (Not Recommended)
Now that I have explained the way it SHOULD be done, let me tell you how you can achieve this using the package/files folder. Again, this is not the recommended approach for installing software on a Linux system; the package management system for your distribution should handle this.
test_client.py - Update your starter file to include the content below. For this example we will copy your test1.jar to the /lib folder with file permissions 0644, owner 'guest', and group 'hadoop':
def configure(self, env):
  # File and StaticFile are provided by Ambari's resource_management module (imported at the top of the script);
  # this lays down the bundled package/files/test1.jar at /lib/test1.jar with the given owner, group, and mode
  File("/lib/test1.jar",
       mode=0644,
       group="hadoop",
       owner="guest",
       content=StaticFile("test1.jar")
  )
Why is this approach not recommended? Because installing software on a Linux distribution should be managed in a way that makes it easy to upgrade and remove that software. Ambari does not have full uninstall functionality when it comes to its services. The most you can do is remove a service from being managed in your Ambari cluster; after doing so, all those files will remain on the system and would have to be removed by writing a custom script or doing it manually. However, if you used package management to handle installing the files, you could easily remove the software using the same package management system.