Terraform won't download plugins without sudo - macOS

I am trying to run my Terraform code, but terraform init is failing. As you can see below, it fails without sudo but works fine with it. I am using macOS Mojave and Terraform 0.12, and I have checked the folder permissions, which look fine.
Once I run sudo terraform init, the other commands no longer need sudo. Here is the output without sudo, followed by the run with sudo:
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
Registry service unreachable.
This may indicate a network issue, or an issue with the requested Terraform Registry.
Registry service unreachable.
This may indicate a network issue, or an issue with the requested Terraform Registry.
Error: registry service is unreachable, check https://status.hashicorp.com/ for status updates
Error: registry service is unreachable, check https://status.hashicorp.com/ for status updates
C02Z1BCSLVCG:blue-deployment shakyas$ sudo terraform init
Password:
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.42.0...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 2.42"
* provider.template: version = "~> 2.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

I had the same issue, and I resolved it by removing a lot of certificates from my macOS Keychain.
It sounds weird and I still don't understand why it works, but it has worked for other people too: https://discuss.hashicorp.com/t/error-when-running-terraform-init/3135/6
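If you want to check whether the failure is TLS/certificate related before touching the Keychain, one quick diagnostic is to request the registry's service discovery document directly, which is the first thing terraform init fetches (a sketch, assuming curl is available; curl may not use exactly the same trust store as Terraform on macOS, but a verbose request at least shows whether TLS to the registry succeeds at all):
# -v prints the certificate chain and the TLS handshake; a broken trust store or an
# intercepting proxy usually shows up here as a certificate verification error
curl -v https://registry.terraform.io/.well-known/terraform.json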

Related

DataHub installation on Minikube failing: "no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"" on elasticsearch setup

I'm following the deployment guide for DataHub with Kubernetes from the documentation: https://datahubproject.io/docs/deploy/kubernetes
Setting up the local cluster with Minikube, I started by following the prerequisites section of the guide.
At first I tried to change some of the default values to try it locally (I had already installed it successfully on Google Kubernetes Engine, so I was trying different setups),
but on the first step of the installation I received the error:
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "elasticsearch-master-pdb" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
The steps I followed after installing Minikube were exactly the steps presented on the page:
helm repo add datahub https://helm.datahubproject.io/
helm install prerequisites datahub/datahub-prerequisites
with the error happening on step 2.
At first I switched back to the default configuration to check that the error wasn't caused by my changed values, but it remained.
I expected that after following the exact default steps the installation would succeed locally, just like it did on GKE.
I got help by browsing the DataHub Slack community and figured out a way to fix this error.
It was simply a Kubernetes version mismatch; I was able to fix it by forcing Minikube to start with Kubernetes 1.19.0:
minikube start --kubernetes-version=v1.19.0
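For context, this error usually means the cluster no longer serves the policy/v1beta1 API (PodDisruptionBudget moved to policy/v1, and v1beta1 was removed in Kubernetes 1.25), while the chart still emits v1beta1 manifests. You can check what your cluster actually serves before deciding to pin the Kubernetes version (assuming kubectl is pointed at the Minikube cluster):
# Lists the policy API versions the current cluster serves; if only policy/v1 shows up,
# charts that still emit policy/v1beta1 PodDisruptionBudgets will fail exactly like this
kubectl api-versions | grep policy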

Terraform behind the password protected proxy command

I have set the proxy on the command line as follows:
set HTTP_PROXY=http://user:password#host.com:8080
set HTTPS_PROXY=https://user:password#host.com:8080
set HTTP_USER=myuser
set HTTP_PASSWORD=mypwd
and furthermore I have set the environment variables HTTP_PROXY, HTTPS_PROXY, HTTP_USER, and HTTP_PASSWORD.
Somehow I am still getting the following error:
>terraform init
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
Registry service unreachable.
This may indicate a network issue, or an issue with the requested Terraform Registry.
Error: registry service is unreachable, check https://status.hashicorp.com/ for status updates
Please note that https://status.hashicorp.com/ is reachable from behind the proxy,
but I am not sure which URL/service API terraform init actually accesses.
This is working for me with a proxy:
C:\Users\xxxx\Desktop\VMWare_Scripts\Terraform>set HTTP_PROXY=http://xxxx:8080
C:\Users\xxxx\Desktop\VMWare_Scripts\Terraform>terraform init
Initializing the backend...
Initializing provider plugins...
Finding latest version of hashicorp/vsphere...
Installing hashicorp/vsphere v1.24.1...
Installed hashicorp/vsphere v1.24.1 (signed by HashiCorp)
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, we recommend adding version constraints in a required_providers block
in your configuration, with the constraint strings suggested below.
hashicorp/vsphere: version = "~> 1.24.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
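Back to the original question: as far as I know, Terraform only honours the standard HTTP_PROXY / HTTPS_PROXY / NO_PROXY variables, so HTTP_USER and HTTP_PASSWORD are ignored and the credentials have to go into the proxy URL itself. A sketch for cmd.exe, reusing the placeholder credentials from the question (special characters in the password need to be URL-encoded):
rem Credentials embedded in the proxy URL; http:// often works better than https://
rem for HTTPS_PROXY if the proxy itself does not speak TLS
set HTTP_PROXY=http://myuser:mypwd@host.com:8080
set HTTPS_PROXY=http://myuser:mypwd@host.com:8080
terraform init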

heroku command not working: error installing core plugins

I keep getting this error. I have installed the Heroku CLI successfully.
C:\Users\hp-u>heroku login
Would you like to submit Heroku CLI usage information to better improve the CLI user experience?
[y/N] Y
heroku-cli: Installing core plugins...Error reading plugin heroku-apps.
Reinstalling... Error reading plugin heroku-apps. Reinstalling ... Error reading plugin heroku-apps. Reinstalling
I removed the s from https:// in the value of the HTTPS_PROXY variable and it worked, so:
export HTTPS_PROXY="http://myproxy.com:8080"
# instead of
export HTTPS_PROXY="https://myproxy.com:8080"
i.e. somehow the internal college proxy itself might not be able to handle SSL communication.
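Since the question is on Windows (cmd.exe), the equivalent there would be something like this, with the same placeholder proxy host as above:
set HTTPS_PROXY=http://myproxy.com:8080
heroku login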

How to debug Jenkins error message "could not find a suitable ssh-agent provider"?

I'm using Jenkins on Windows 7 and I've installed Tomcat for the ssh-agent plugin. I can clone my GitLab project via Git Bash over SSH.
But when I build the project with Jenkins, it always says:
[ssh-agent] Using credentials IliptonChen(APRTest)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] FATAL: Could not find a suitable ssh-agent provider
FATAL:[ssh-agent] Unable to start agent
The full output text is here
Did I do anything wrong?
Check the version of your ssh-agent used by Jenkins.
This bug (filed for Linux, but it could apply to Windows too) reported this very same error message in January 2014:
"JENKINS-20276: Native Library Error after upgrading ssh-agent from 1.3 to 1.4".
Downgrading to 1.3 resolves the issue.
Update 2019, five years later: as commented, this should be fixed now.
ssh-agent.exe is part of a Git for Windows distribution
D:\git\git>where ssh-agent.exe
D:\prgs\gits\current\usr\bin\ssh-agent.exe
(provided path/to/git/usr/bin is first in the %PATH% used by Jenkins)
Assuming you've installed Git for Windows on the Windows slave, it comes with the ssh-agent binary (e.g. C:\Program Files\Git\usr\bin). Try adding that path to the system PATH variable.
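If you prefer doing that from a terminal rather than the System Properties dialog, something along these lines works (a sketch using the default install path mentioned above; run it in an elevated cmd prompt, and note that setx truncates values longer than 1024 characters, so the GUI is safer if your PATH is already long):
rem Append Git's usr\bin (which contains ssh-agent.exe) to the machine-wide PATH
setx /M PATH "%PATH%;C:\Program Files\Git\usr\bin"
rem Restart the Jenkins service afterwards, then verify the binary resolves
where ssh-agent.exe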
Otherwise untick SSH Agent and choose the credentials by selecting Credentials from dropdown in Source Code Management section.
Another way is to generate a personal API token (OAuth) for that GitHub user and include it in your repository address, e.g.
git clone https://4UTHT0KEN#github.com/foo/bar
For Windows, the plugin still requires Tomcat to be installed on both the master and the slave.
I got this error because I was using an Ubuntu image for the agent, which doesn't have SSH installed.
agent {
    docker { image 'ubuntu:focal' }
}
... so the solution was as simple as installing SSH as part of the pipeline:
steps {
    sh "apt-get update && apt-get install ssh -y"
    // rest of your steps here...
}
In my case, the error was accompanied by an error about disk space depletion:
[ssh-agent] FATAL: Could not find a suitable ssh-agent provider
[ssh-agent] Diagnostic report
[ssh-agent] * Exec ssh-agent (binary ssh-agent on a remote machine)
[ssh-agent] hudson.AbortException: Failed to run ssh-agent: mkdtemp: private socket dir: No space left on device
So I ran docker system prune -a, which fixed it.
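Before pruning, it can be worth confirming that Docker is actually what is filling the disk (a quick check, assuming a Linux agent):
# Overall disk usage on the agent
df -h
# Breakdown of what Docker itself is consuming (images, containers, volumes, build cache)
docker system df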

How do I install security updates on an Amazon Linux AMI EC2 instance?

I see the following notices displayed on login:
       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|
See /usr/share/doc/system-release/ for latest release notes.
There are 30 security update(s) out of 39 total update(s) available
How do I install these updates on my machine?
As outlined in the Security Updates section of Amazon Linux AMI Basics, Amazon Linux AMIs are configured to download and install security updates at launch time. That is, if you do not need to preserve data or customizations on your running Amazon Linux AMI instances, you can simply relaunch new instances from the latest updated Amazon Linux AMI (see the Product Life Cycle section for details).
This currently includes only Critical or Important security updates, though; see the AWS team's response to Best practices for Amazon Linux image security updates:
The default on the Amazon Linux AMI is to install any Critical or
Important security updates on launch. This is a function of cloud-init
and can be modified in cloud.cfg on the box or by passing in user data.
This is why you see some security updates still available at launch.
Consequently, if you want to install all security updates, or if you do need to preserve data or customizations on your running Amazon Linux AMI instances, you have to maintain those instances through the Amazon Linux AMI yum repositories, i.e. use the regular yum update mechanism as outlined for the yum-security plugin:
# yum update --security
Please note: this does not work if only security updates are selected, because security updates are not properly flagged in CentOS and Amazon Linux. This may be a matter of Red Hat making security a paid feature, which, if I'm being frank, is bullshit.
For this to work you must update the yum-cron config file to install all updates, which makes security updates less likely to run reliably and therefore makes everyone less secure:
update_cmd = default
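For reference, the relevant lines in /etc/yum/yum-cron.conf then look something like this (a sketch; apply_updates also needs to be yes, otherwise updates are only downloaded, not installed):
# Apply all available updates, since security-only filtering is unreliable on these distros
update_cmd = default
# Actually install the downloaded updates
apply_updates = yes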
Amazon Linux runs updates when the host boots for the first time.
If you plan to have hosts up long-term you may also want to enable automatic security updates. I recommend using yum-cron:
sudo yum install yum-cron
The config file is here (you probably want to run just security updates):
/etc/yum/yum-cron.conf
You can then enable yum-cron like so:
sudo service yum-cron start
Edit, from a useful comment below:
"If you're creating/destroying instances with an auto-scaling group, etc., the command should be something like sudo yum update -y in user data."
The answer above is correct; here are the four commands you can copy and paste to run:
# Install the package yum-cron
sudo yum install yum-cron -y
# Change the config file /etc/yum/yum-cron.conf and modify the line apply_updates from no to yes
sudo sed -i "s/apply_updates = no/apply_updates = yes/" /etc/yum/yum-cron.conf
# Enable the yum-cron service to start automatically upon system boot
sudo systemctl enable yum-cron
# Start the yum-cron service now
sudo systemctl start yum-cron
These commands also work on Red Hat 7 and CentOS 7.
If you are running as the root user you can simply run the commands without sudo:
yum install yum-cron -y
sed -i "s/apply_updates = no/apply_updates = yes/" /etc/yum/yum-cron.conf
systemctl enable yum-cron
systemctl start yum-cron
For more information see https://linuxize.com/post/configure-automatic-updates-with-yum-cron-on-centos-7/
https://www.howtoforge.com/tutorial/how-to-setup-automatic-security-updates-on-centos-7/
