Timeout Problem with Ruby Fog when bootstrapping AWS servers

I've been trying for a little while now to provision a small instance on AWS with the fog library. I've been somewhat successful (an instance does spool up when I run this code), but I keep getting timeout errors during the SSH portion, and when I dug deeper I found that they're consistently "AuthenticationFailed" problems.
The failing code is as follows:
require 'rubygems'
require 'fog'

connection = Fog::Compute.new({
  provider: "AWS",
  aws_secret_access_key: SECRET_KEY,
  aws_access_key_id: ACCESS_KEY
})

server = connection.servers.bootstrap({
  private_key_path: "~/.ssh/id_rsa",
  public_key_path: "~/.ssh/id_rsa.pub",
  username: "ubuntu"
})
Much reading has told me that sometimes this is just because the instance takes too long to spool up, but this is very consistent (it happens every time I try it). Does anyone see what I'm doing wrong?

I had the same problem some days ago, actually found the cause in my case, and submitted it to the Fog issue tracker.
A colleague of mine had been using connection.servers.bootstrap() with the same AWS credentials but different SSH keys, so the "fog_default" key pair had already been registered with his public key, and the attempt to log in with my key pair failed.
If you're experiencing similar problems, run connection.key_pairs.get('fog_default') to check whether fog_default was registered before.
If this is actually the case, you have three options to get around this problem:
1. Delete fog_default by running connection.key_pairs.get('fog_default').destroy, then register your new public key via bootstrap().
2. Manually register your custom key under a custom name.
3. Set Fog.credential to a custom name so bootstrap() uses this name instead of "default" when registering your public key.
Solution two looks like this:
Fog.credentials = Fog.credentials.merge({
  :private_key_path => "./keys/my_custom_key",
  :public_key_path  => "./keys/my_custom_key.pub"
})

if connection.key_pairs.get('my_custom_key').nil?
  public_key = IO.read('./keys/my_custom_key.pub')
  connection.import_key_pair('my_custom_key', public_key)
end

server = connection.servers.bootstrap(
  :key_name => 'my_custom_key',
  ...
)
Solution three, which I prefer because the only change I need to make is to set Fog.credential, looks like this:
Fog.credential = :my_custom_key

connection.servers.bootstrap(
  :private_key_path => './keys/my_custom_key',
  :public_key_path  => './keys/my_custom_key.pub',
  ...
)

I'd recommend a few things to diagnose the problem (if you're still having it):
Check your security groups to make sure port 22 is open to your IP / the world (0.0.0.0/0)
Try connecting manually using SSH, e.g. ssh -i ~/.ssh/id_rsa ubuntu@<public-dns-of-your-instance> (key path and username as in the bootstrap call above)
If you still see issues, try
ssh -v -v <normal options>
This will give you more information on what is happening when trying to connect to the instance.


Setting up GenesysGo as RPC Host on Candy Machine V2

I am finalizing my first Candy Machine minting project using Candy Machine V2. I have read that it's not a good idea to use the default https://api.mainnet-beta.solana.com host because it can't handle large amounts of traffic. I've updated the Candy Machine .env file to use the GenesysGo endpoint (https://shdw.genesysgo.com/genesysgo/the-genesysgo-rpc-network); my .env file looks like this:
REACT_APP_CANDY_MACHINE_ID=<my ID>
REACT_APP_SOLANA_NETWORK=mainnet-beta
REACT_APP_SOLANA_RPC_HOST=https://ssc-dao.genesysgo.net/
SKIP_PREFLIGHT_CHECK=true
I was reading this article: https://medium.com/@elysianft/lets-put-an-end-to-bad-drops-on-solana-c8cfd6d33e69. It says to find web3.clusterApiUrl(env) and replace it with the updated RPC URL from GenesysGo, but I don't see that line in the assets.ts file the article mentions. I only see those lines in the following two files:
App.tsx (looks like this file already takes the rpcHost if there is one):

const connection = new anchor.web3.Connection(
  rpcHost ? rpcHost : anchor.web3.clusterApiUrl('devnet'),
);
cli-nft.ts:
Original:
const connection = new web3.Connection(web3.clusterApiUrl(env));
After my update:
const connection = new web3.Connection(rpcUrl || web3.clusterApiUrl(env));
My question is: should I actually update these files, or is the article out of date?
Any help is appreciated.
You don't have to change any code at the moment; that guide is really outdated!
In order to use a custom RPC you just have to change the .env file and nothing else. Just make sure to use the latest version of candy-machine-ui. Your .env file is completely correct, by the way.

Google gapi.auth2.getAuthInstance().isSignedIn().get() is always false when multiple google accounts

I am having a weird problem with google gapi auth. For some reason, the value for gapi.auth2.getAuthInstance().isSignedIn().get() is always returning false. This is my setup:
gapi.load("auth2", initAuth2);

function initAuth2() {
  gapi.auth2.init({
    client_id: "xxxxx-yyyyy.apps.googleusercontent.com",
    hosted_domain: "domain.com",
    redirect_uri: "http://localhost:4200",
    ux_mode: "redirect",
  }).then(performAuth, error => {
    console.error(`Error initiating gapi auth2: ${error.details}`);
  });
}

function performAuth(googleAuth) {
  const isSignedIn = googleAuth.isSignedIn.get();
  if (!isSignedIn) {
    googleAuth.signIn();
    return;
  }
  const user = googleAuth.currentUser.get();
  console.log(user);
}
I have two Google Workspace accounts signed in to the same Chrome profile. When I run this script, I get the prompt to select an account. No matter which one I choose, the flow just keeps looping. The reason is that the line const isSignedIn = googleAuth.isSignedIn.get(); always returns false.
Things I've tried so far:
I thought that maybe the client_id was corrupted, so I generated a new one. Same behaviour.
I thought the GCP project was corrupted, so I created a new project with new credentials. Same behaviour.
I thought there was an issue with cookies, so I deleted and cleared cookies and history. Same behaviour.
I thought it was related only to localhost, so I deployed to the web. Same behaviour.
If I change the init options from ux_mode: "redirect" to ux_mode: "prompt", it works. However, that is not the desired experience. Also, if I only have one Google Workspace account in the Chrome profile, it works. Even more interesting: if I use a client id from an older project, it works! The problem is that the consent screen then shows the wrong app name.
I know this question is similar to this one; however, I feel it's different because none of the above troubleshooting worked. Any insights?
There is a report of the exact same problem here; it is most likely a bug in the API itself.
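Until that bug is resolved, you can at least keep the redirect flow from looping forever by only auto-triggering signIn() once per tab. A minimal sketch of the performAuth function above (the sessionStorage guard is my own addition, not something from the gapi docs):

function performAuth(googleAuth) {
  if (!googleAuth.isSignedIn.get()) {
    // Only kick off the redirect once per tab, so a stuck `false`
    // reading can't bounce us to the account picker in an endless loop.
    if (!sessionStorage.getItem('gapiSignInAttempted')) {
      sessionStorage.setItem('gapiSignInAttempted', '1');
      googleAuth.signIn();
    }
    return;
  }
  sessionStorage.removeItem('gapiSignInAttempted');
  console.log(googleAuth.currentUser.get());
}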

Uncaught Error: Returned values aren't valid, did it run Out of Gas?

I'm listening to events of my deployed contract. Whenever a transaction completes and an event is fired, receiving the response causes the following error:
Uncaught Error: Returned values aren't valid, did it run Out of Gas?
    at ABICoder.push../node_modules/web3-eth-abi/src/index.js.ABICoder.decodeParameters (index.js:227)
    at ABICoder.push../node_modules/web3-eth-abi/src/index.js.ABICoder.decodeLog (index.js:277)
Web3 version: 1.0.0-beta36
Metamask version: 4.16.0
How to fix it?
Try the command truffle migrate --reset so that all the previous values are reset to their original value
The same error is thrown when a transaction emits different events with the same name and the same arguments. In my case, this was the Transfer event from ERC721 and ERC20. Renaming one of them solves the problem, but of course this isn't the right way.
This is a bug in web3js, discussed here.
And the following change fixes it (source), shown as a patch-package style diff:

--- a/node_modules/web3-eth-abi/src/index.js
+++ b/node_modules/web3-eth-abi/src/index.js
@@ -280,7 +280,7 @@ ABICoder.prototype.decodeLog = function (inputs, data, topics) {
     var nonIndexedData = data;
-    var notIndexedParams = (nonIndexedData) ? this.decodeParameters(notIndexedInputs, nonIndexedData) : [];
+    var notIndexedParams = (nonIndexedData && nonIndexedData !== '0x') ? this.decodeParameters(notIndexedInputs, nonIndexedData) : [];
     var returnValue = new Result();
     returnValue.__length__ = 0;
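If you go the patch-package route, the usual flow (assuming patch-package is installed as a dev dependency) is: make the edit directly in node_modules/web3-eth-abi/src/index.js, run npx patch-package web3-eth-abi to capture the diff into a patches/ directory, and re-apply it on every install with a postinstall hook in package.json:

{
  "scripts": {
    "postinstall": "patch-package"
  }
}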
Edit: downgrading to web3 1.0.0-beta.33 also fixes this issue.
Before even checking your ABI or redeploying, make sure MetaMask is connected to whichever network your contract is actually deployed to. I stepped away, and while I was afk MetaMask logged out; I wasn't watching closely and was connected to Ropsten while working on localhost. A simple mistake that wasted an hour or so. Hope this helps someone else!
This happened to me on my React app.
I deployed the contract to the Ropsten network, but MetaMask was using a Rinkeby account. So whichever network you deployed to, make sure MetaMask is using an account on that network.
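One way to catch this mismatch up front is to compare the network MetaMask is connected to with the networks the contract was actually deployed to. A minimal sketch, assuming web3 1.x and a Truffle-style build artifact (MyContract.json and its networks map are stand-ins for your own build output):

// Fail fast if MetaMask's selected network has no deployment of the contract.
const artifact = require('./build/contracts/MyContract.json');

async function getDeployedContract(web3) {
  const networkId = await web3.eth.net.getId(); // network MetaMask is on
  const deployment = artifact.networks && artifact.networks[networkId];
  if (!deployment) {
    throw new Error(
      `No deployment found on network ${networkId}; ` +
      'check which network MetaMask is pointed at.'
    );
  }
  return new web3.eth.Contract(artifact.abi, deployment.address);
}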
The solution for me was changing the provider. With Infura the error is gone, but with Alchemy it still happens.
Please check your MetaMask login. This issue generally shows up when you are either logged out of MetaMask or, in the worse case, have 0 ether left in your account.
This can also happen when the MNEMONIC value from Ganache is different from the one you have in your truffle.js or truffle-config.js file.

unable to connect to back4app using parse4cn1 lib

Good day all. I tried connecting to the back4app backend service using the new parse4cn1 lib. I supplied the required keys, but my app could not connect to the backend for some strange reason; it kept reporting "unable to connect to back end" followed by some long Codename One revision numbers.
Some help or direction would be appreciated. Thanks all.
What keys are you using? You need to pass your App Id and Client Key to parse4cn1. I just ran the regression tests against back4app and I didn't get any connection error. Can you provide more details (e.g. a dump) of the error you're getting?
Where are you putting your Parse.initialize? I went digging in parse4cn1 and it seems that it needs to be inside an initVars method, usually created within a state machine, as you can see in the example below:
public class StateMachine extends StateMachineBase {
    /**
     * This method should be used to initialize variables instead of
     * the constructor/class scope to avoid race conditions.
     */
    protected void initVars(Resources res) {
        Parse.initialize(API_ENDPOINT, APP_ID, CLIENT_KEY);
    }
}
Maybe that can help you on this connection issue. Also check the link below (A very useful guide) for further info:
https://github.com/sidiabale/parse4cn1/wiki/Usage-Examples

apiKey key ID and secret is required even though they're there in express-stormpath

I'm trying to use express-stormpath on my Heroku app. I'm following the docs here, and my code is super simple:
var express = require('express');
var app = express();
var stormpath = require('express-stormpath');

app.use(stormpath.init(app, {
  website: true
}));

app.on('stormpath.ready', function() {
  app.listen(3000);
});
I've already looked at this question and followed the Heroku Dev Center docs. The docs say that for a Heroku app it's not necessary to pass in options, but I've still tried passing in options and nothing works. For example, I've tried this:
app.use(stormpath.init(app, {
  // client: {
  //   file: './xxx.properties'
  // },
  client: {
    apiKey: {
      file: './xxx.properties',
      id: process.env.STORMPATH_API_KEY_ID || 'xxx',
      secret: process.env.STORMPATH_API_KEY_SECRET || 'xxx'
    }
  },
  application: {
    href: 'https://api.stormpath.com/v1/applications/blah'
  },
}));
To try to see what's going on, I added a console.log line to the stormpath-config strategy validator to print the client object, and it gives me this:
{ file: './apiKey-xxx.properties',
id: 'xxx',
secret: 'xxx' }
{ file: null, id: null, secret: null }
Error: API key ID and secret is required.
Why is it getting called twice, and the second time around, why does the client object have null values for the file, id and secret?
When I run heroku config | grep STORMPATH, I get
STORMPATH_API_KEY_ID: xxxx
STORMPATH_API_KEY_SECRET: xxxx
STORMPATH_URL: https://api.stormpath.com/v1/applications/[myappurl]
I'm the original author of the express-stormpath library, and also wrote the Heroku documentation for Stormpath.
This is 100% my fault, and is a documentation / configuration bug on Stormpath's side of things.
Back in the day, all of our libraries looked for several environment variables by default:
STORMPATH_URL (your Application URL)
STORMPATH_API_KEY_ID
STORMPATH_API_KEY_SECRET
However, a while ago, we started upgrading our libraries, and realized that we wanted to go with a more standard approach across all of our supported languages / frameworks / etc. In order to make things more explicit, we essentially renamed the variables we look for by default, to:
STORMPATH_APPLICATION_HREF
STORMPATH_CLIENT_APIKEY_ID
STORMPATH_CLIENT_APIKEY_SECRET
Unfortunately, we did not yet update our Heroku integration or documentation to reflect these changes, which is why you just ran into this nasty issue.
I just submitted a ticket to our Engineering team to fix the names of the variables that our Heroku addon provisions by default to include our new ones, and I'm going to be updating our Heroku documentation later this afternoon to fix this for anyone else in the future.
I'm sincerely sorry about all the confusion / frustration. Sometimes these things slip through the cracks, and experiences like this make me realize we need better testing in place to catch this stuff earlier.
I'll be working on some changes internally to make sure we have a better process around rolling out updates like this one.
If you want a free Stormpath t-shirt, hit me up and I'll get one shipped out to you as a small way to say 'thanks' for putting up with the annoyance: randall@stormpath.com
After endless hours, I finally got it working by removing the add-on entirely, re-installing it via the Heroku CLI, and then exporting the variables STORMPATH_CLIENT_APIKEY_ID and STORMPATH_CLIENT_APIKEY_SECRET (e.g. heroku config:set STORMPATH_CLIENT_APIKEY_ID=xxx STORMPATH_CLIENT_APIKEY_SECRET=xxx). For some reason, installing it via the Heroku Dashboard causes express-stormpath to not find the apiKey and secret fields (even if you export the variables).
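If the library still doesn't pick the new variable names up on its own, you can also wire them in explicitly. A minimal sketch reusing the option shape from the question's own config (the environment variable names are the new-style ones from the answer above):

app.use(stormpath.init(app, {
  client: {
    apiKey: {
      // New-style names that current library versions expect by default.
      id: process.env.STORMPATH_CLIENT_APIKEY_ID,
      secret: process.env.STORMPATH_CLIENT_APIKEY_SECRET
    }
  },
  application: {
    href: process.env.STORMPATH_APPLICATION_HREF
  }
}));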
