I'm trying to create a Solana wallet using solana-keygen and then check its balance.
With this line I create the wallet with a specific outfile:
C:\Users\Ali Berkin>solana-keygen new --force -o "C:\Users\Ali Berkin\Documents\Solana\test.json"
It generated the wallet successfully and output this:
Generating a new keypair
For added security, enter a BIP39 passphrase
NOTE! This passphrase improves security of the recovery seed phrase NOT the
keypair file itself, which is stored as insecure plain text
BIP39 Passphrase (empty for none):
Wrote new keypair to C:\Users\Ali Berkin\Documents\Solana\test.json
================================================================================
pubkey: CgvYXNqdVLvNvByFXiSkFGfRC3QFR9SGZq17Bq1bRdht
================================================================================
Then I set this keypair as my default keypair:
C:\Users\Ali Berkin>solana config set --keypair "C:\Users\Ali Berkin\Documents\Solana\test.json"
Config File: C:\Users\Ali Berkin\.config\solana\cli\config.yml
RPC URL: https://metaplex.devnet.rpcpool.com/
WebSocket URL: wss://metaplex.devnet.rpcpool.com/ (computed)
Keypair Path: C:\Users\Ali Berkin\Documents\Solana\test.json
Commitment: confirmed
Finally, when I tried to check my balance, it threw the following error:
C:\Users\Ali Berkin>solana balance
Error: Dynamic program error: No default signer found, run "solana-keygen new -o C:\Users\Ali Berkin\Documents\Solana\test.json" to create a new one
I already created a keypair at C:\Users\Ali Berkin\Documents\Solana\test.json, but the error tells me to create one. Can someone help me with this?
Apparently, the space in my username caused the problem. I created a new wallet in C:\solana and it works now.
C:\solana>solana-keygen new --force -o "C:\solana\test.json"
Generating a new keypair
For added security, enter a BIP39 passphrase
NOTE! This passphrase improves security of the recovery seed phrase NOT the
keypair file itself, which is stored as insecure plain text
BIP39 Passphrase (empty for none):
Wrote new keypair to C:\solana\test.json
pubkey: ASgogsZ7WW6uuGQYFX6BwfjwrEytzNJt4f9pVyp9gaaN
C:\solana>solana config set --keypair "C:\solana\test.json"
Config File: C:\Users\Ali Berkin\.config\solana\cli\config.yml
RPC URL: https://metaplex.devnet.rpcpool.com/
WebSocket URL: wss://metaplex.devnet.rpcpool.com/ (computed)
Keypair Path: C:\solana\test.json
Commitment: confirmed
C:\solana>solana balance
0 SOL
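As an extra sanity check (not part of my original steps), running solana address prints the public key of the keypair the CLI is configured to sign with, so it should match the pubkey that solana-keygen wrote:
C:\solana>solana address
ASgogsZ7WW6uuGQYFX6BwfjwrEytzNJt4f9pVyp9gaaN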
I am trying to connect to a topic that requires SSL. Can anyone suggest an approach to achieve this?
I have a ca.crt, ca.p12 and password provided by the kafka cluster administrator.
I can connect just fine using a test utility that leverages the keytool/keystore, but these don't work for Go.
The code to connect to the cluster is as follows:
func (p *KafkaPublisher) Initialise() {
    configFile := "./config/kafka.properties"
    fmt.Printf("Reading config file from: %s\n", configFile)
    conf := ReadConfig(configFile)
    var err error
    kafkaProducer, err = kafka.NewProducer(&conf)
    // rest of code commented out.
}
And this works just fine for SASL using the following config file:
bootstrap.servers=markets-cb--vog--pu-taejvaba.bf2.kafka.rhcloud.com:443
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
# Service account user name
sasl.username=700e9c83-b4be-4f23-8697-b6cfa5921354
sasl.password=ab955a0d-e78d-4d96-bd8e-35de3b6b83e5
# Best practice for Kafka producer to prevent data loss
acks=all
I reviewed the following documentation to try and find the right properties:
https://docs.confluent.io/platform/current/clients/librdkafka/html/md_CONFIGURATION.html#autotoc_md91
but have been unsuccessful so far. This is what I tried:
bootstrap.servers=my-cluster-kafka-listener1-bootstrap-climate.violet-cluster-new-2761a99850dd8c23002367ac6ce7f9ad-0000.au-syd.containers.appdomain.cloud:443
security.protocol=SSL
ssl.key.password=ebuSuzkDbfFK
ssl.certificate.location="ca.pem"
# Best practice for Kafka producer to prevent data loss
acks=all
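For completeness, my current reading of that configuration table (which may well be wrong) is that the CA certificate belongs in ssl.ca.location, while ssl.certificate.location / ssl.key.location are for a client certificate and key, so the next variant I plan to try looks roughly like this (the host is a placeholder):
bootstrap.servers=<bootstrap-host>:443
security.protocol=SSL
# assumption: the cluster CA certificate goes here, not in ssl.certificate.location
ssl.ca.location=ca.pem
# ssl.certificate.location / ssl.key.location / ssl.key.password would only be needed for a client certificate (mutual TLS)
acks=all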
I have also tried putting the PEM file inline as a single quote-delimited line using the property ssl.certificate.pem. This produced the most encouraging result, with the error:
Failed to create producer: ssl.certificate.pem failed: not in PEM format?: error:0909006C:PEM routines:get_name:no start line: Expecting: CERTIF
But it is in PEM format...
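For reference, a quick way to double-check that (assuming openssl is available; ca.pem is the file referenced in the config above):
# decodes the PEM and prints the subject and validity window;
# a non-PEM file fails here with a similar "no start line" error
openssl x509 -in ca.pem -noout -subject -dates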
Here is the certificate. (Don't worry, there is no security risk in me pasting it here, as the cert has since been removed from the server and I changed the URL and password above.)
-----BEGIN CERTIFICATE-----
MIIFLTCCAxWgAwIBAgIUBT4au51IElFmVL4RenvrRsFpwiwwDQYJKoZIhvcNAQEN
BQAwLTETMBEGA1UECgwKaW8uc3RyaW16aTEWMBQGA1UEAwwNY2x1c3Rlci1jYSB2
MDAeFw0yMjA3MTkwNTE4NDBaFw0yMzA3MTkwNTE4NDBaMC0xEzARBgNVBAoMCmlv
LnN0cmltemkxFjAUBgNVBAMMDWNsdXN0ZXItY2EgdjAwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQDdghbY97oYE5A1GcGccMMp01RW3DSsl3tZ3U/Q2YKY
IDkqwkITevXu0WjUHh7v/659xwryNMtUjlz8JF4MZQnZwq1xEX6ldA+/+JeG2pJE
eEnFXPvP9meDfi2N5bQC/At5N5ZAca4jfrKVognzgHMj1JMwXtTLu4Jw73Za+dRg
y30I51x33zPTYgq+5QKTssOvvAQp+OGsf2ts1s3P5weOLJ3tfGUrVdhoblMB+RUl
TF7KIuknFY0+sNRUJSeuw29qUdH9KFJ3bYBMEF2afSybS4DlFSrs7Od0ZsPWnONn
ZRV/SdjpvjP13k32XqzX8O1h0oOnHOPExE7OQgjRVcluJh3aJbSEAb9DnAUCRX88
/ZA2tlDWGxJIv94CruKDaF6vMaupUUXmQG0Irk8cfGKHgwjvG2HL2U/oaJpPEtdo
Zu2TgeHxF/k9YBfzKC6ZlZrwQLN1iL+mAL45ql1OYyGVWPS6NWUSpJA9FS7qJlfr
o7hiD/wg4xubtGntBCui+jdrRoDwcVQvk5+pZYNchK26oT4qjY57YhXLWByesg+c
MUSDTgtU//DHfAq6VyJqMbnP68RwW1MRMg1lbiMUTJ8wTsKgBm6fZ1E2+Jg/wJtq
FhLTBjiB6Fa2m6aZQdqaN/RMu9mUNjE45b4GVIFayouS1ejGoNNsC2rv0n4CQhNj
+wIDAQABo0UwQzAdBgNVHQ4EFgQU2YAwObuT5NpEivgwO+j+Z/qSn1gwEgYDVR0T
AQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQADggIB
AB+18g/YjUYpLbBrQgzvHhNP0THKUzVQ5ze15JZfaQHdOf6XYhAMpnHm/mBrXM2B
5ZHwkQzCuHfZmSikeVnr/thloh1+2NHdf9s4XQtxZiouZxzNnbX3Hf/gviX/lvm2
p8twSqxsMnI00x7jPDGDmZBF5bX7Mtp3c7gD3K4goz8InHdn8j1jxhYg0fQdX8j3
ryoy+hWCkaW1PPYxGgrmdJg7kiffTBw3jx9+Md11EXb7+ryZeUEsI6MqGQSg6F94
U+nPWV/qBErEbe5iNISdOkUK8wcjk/IOVAps1CJs8BECcCaReGwVpC8twxx9c8BY
DEW76Y8J9syxgxUZeEywoqguxc80SSuTLXcNkAoifdReUeW/b08cJffP55nztfbE
Fhg4E5vYj41q2Z5iOI1sZsY22Z4VszW0Fl11DcloM4/088W6O3Lp3Jo0rMu/k14/
DNr5AM8Lrgno947S1OWZ87Q1IF8zlayM+c5XWRHO64jBHouTo1HvDodudaF+XYxv
F7xRVUehnnACQExy2OYeOkjtxmsinQDfZcvvj7b7NfCqytM3IjB1jk9GxeoFgKYj
/n9WFjHWtSnC+nsyZo2c37XeaHbBEtls3LHXb6+OmOtiTzw0C8TEQJ4AcqaPNk1k
knHg6PPWBenWskC9KH898c6vvhZ5/VHSWXJG6f8GxWya
-----END CERTIFICATE-----
I am trying to deploy the demo smart contract on Solana for the Chainlink price feed, but I am getting an error. I followed all the steps from https://docs.chain.link/docs/solana/using-data-feeds-solana/
$ anchor deploy --provider.wallet ./id.json --provider.cluster devnet
Deploying workspace: https://api.devnet.solana.com
Upgrade authority: ./id.json
Deploying program "chainlink_solana_demo"...
Program path: /home/test/solana-starter-kit/target/deploy/chainlink_solana_demo.so...
=============================================================================
Recover the intermediate account's ephemeral keypair file with
`solana-keygen recover` and the following 12-word seed phrase:
=============================================================================
until reason almost can clean wish trend buffalo future auto artefact balcony
=============================================================================
To resume a deploy, pass the recovered keypair as the
[BUFFER_SIGNER] to `solana program deploy` or `solana write-buffer'.
Or to recover the account's lamports, pass it as the
[BUFFER_ACCOUNT_ADDRESS] argument to `solana program close`.
=============================================================================
Error: Custom: Invalid blockhash
There was a problem deploying: Output { status: ExitStatus(unix_wait_status(256)), stdout: "", stderr: "" }.
This is just a timeout error message; it happens from time to time depending on network availability.
Follow these steps and run it again. I've also found that you'll want quite a bit more SOL in your wallet than you think you need.
To resume a deploy, pass the recovered keypair as the
[BUFFER_SIGNER] to `solana program deploy` or `solana write-buffer'.
Or to recover the account's lamports, pass it as the
[BUFFER_ACCOUNT_ADDRESS] argument to `solana program close`.
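Concretely, a sketch of those recovery steps (command names and flags are from the solana CLI; the buffer keypair filename is a placeholder):
# recreate the buffer keypair from the 12-word seed phrase printed in the error
solana-keygen recover -o buffer-keypair.json

# either resume the deploy using that buffer account...
solana program deploy --buffer buffer-keypair.json target/deploy/chainlink_solana_demo.so --url devnet

# ...or reclaim the lamports held in the buffer account
solana program close <BUFFER_ACCOUNT_ADDRESS> --url devnet

# and, per the note above, make sure the wallet holds more devnet SOL than you think you need
solana airdrop 2 --url devnet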
I'm working on a sample application where I want to connect to HashiCorp Vault to get the DB credentials. Below is the bootstrap.yml of my application.
spring:
  application:
    name: phonebook
  cloud:
    config:
      uri: http://localhost:8888/
    vault:
      uri: http://localhost:8200
      authentication: token
      token: s.5bXvCP90f4GlQMKrupuQwH7C
  profiles:
    active:
      - local,test
The application builds properly when the Vault server is unsealed, and Maven fetches the database username from Vault correctly. When I run the build after sealing Vault, the build fails with the error below.
org.springframework.vault.VaultException: Status 503 Service Unavailable [secret/application]: error performing token check: Vault is sealed; nested exception is org.springframework.web.client.HttpServerErrorException$ServiceUnavailable: 503 Service Unavailable: [{"errors":["error performing token check: Vault is sealed"]}
How can I resolve this? I want Maven to get the DB username and password from Vault during the build without any issues, even when it is sealed.
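For reference, the seal state is easy to confirm from the Vault CLI before the build runs (a sketch; output trimmed to the relevant fields):
$ vault status
Key             Value
---             -----
Initialized     true
Sealed          true
A sealed Vault cannot serve any secret reads, which is what the 503 above reflects.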
It is part of the value of Vault that it is not simple static storage: after any change in the environment, you need to perform some actions to keep the system stable and workable.
Advice: create a script (or scripts) to automate the process.
For example, I have a multi-service system, and some of my services use Vault to get their configuration.
init.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
vault operator unseal <token1>
vault operator unseal <token2>
vault operator unseal <token3>
vault login <main token>
vault secrets enable -path=<path>/ -description="secrets for My projects" kv
vault auth enable approle
vault policy write application-policy-dev ./application-policy-DEV.hcl
application.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
vault login <main token>
vault delete <secret>/<app_path>
vault delete sys/policy/<app>-policy
vault delete auth/approle/role/<app>-role
vault kv put <secret>/<app_path> - < <(yq m ./application.yaml)
vault policy write <app>-policy ./<app>-policy.hcl
vault write auth/approle/role/<app>-role token_policies="<app>-policy"
role_id=$(vault read auth/approle/role/<app>-role/role-id -format="json" | jq -r '.data.role_id')
secret_id=$(vault write auth/approle/role/<app>-role/secret-id -format="json" | jq -r '.data.secret_id')
token=$(vault write auth/approle/login role_id="${role_id}" secret_id=${secret_id} -format="json" | jq -r '.auth.client_token')
echo 'Token:' ${token}
where <app> is the name of your application, application.yaml is the file with the configuration, and <app>-policy.hcl is the file with the policy.
Of course, none of these files should be public; they are for Vault administration only.
On any change in the environment (or whenever Vault ends up sealed again), just run init.sh. To get a token for the application, run application.sh. If you need to change a configuration parameter, change it in application.yaml, run application.sh, and use the resulting token.
Script result (for one of my services):
Key                  Value
---                  -----
token                *****
token_accessor       *****
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Success! Data deleted (if it existed) at: <secret>/<app>
Success! Data deleted (if it existed) at: sys/policy/<app>-policy
Success! Data deleted (if it existed) at: auth/approle/role/<app>-role
Success! Data written to: <secret>/<app>
Success! Uploaded policy: <app>-policy
Success! Data written to: auth/approle/role/<app>-role
Token: s.dn2o5b7tvxHLMWint1DvxPRJ
Process finished with exit code 0
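The resulting token is then what the application's bootstrap configuration should point at; with the bootstrap.yml from the question, that just means swapping in the fresh token (a sketch, not part of my scripts; the token value is the one printed above):
spring:
  cloud:
    vault:
      uri: http://localhost:8200
      authentication: token
      token: s.dn2o5b7tvxHLMWint1DvxPRJ   # token printed by application.sh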
I'm trying to build a linux kernel for my Arch install. I'd like to verify the signatures but find I can't get the keys needed to do that:
[joemadeus@<host>]$ gpg2 -vvv --locate-keys torvalds@kernel.org gregkh@kernel.org
gpg: using character set 'iso-8859-1'
gpg: using pgp trust model
gpg: key <HEX CHARS HERE> accepted as trusted key
gpg: error retrieving 'gregkh@kernel.org' via Local: No public key
gpg: error retrieving 'gregkh@kernel.org' via WKD: No data
gpg: error reading key: No data
gpg: error retrieving 'torvalds@kernel.org' via Local: No public key
gpg: error retrieving 'torvalds@kernel.org' via WKD: No data
gpg: error reading key: No data
Obviously these keys are there and something is wrong with the way I'm going after them. Unfortunately there's nothing here that gives me any hints, even with verbose turned on. And, searching about I find... nothing.
I do have connectivity to the outside world and can get to kernel.org via HTTP without any trouble. In fact, that's where I found out how to get the keys (https://www.kernel.org/category/signatures.html). I've tried several times over the last couple of days, so I don't think kernel.org is having problems (unless they're very long-lived ones).
I have created a key for myself with this login on the local system. I haven't pushed it out anywhere. I don't know if any of that matters.
Any hints?
The solution comes from this post, found by a friend of mine, which answers a similar question (but with a different error message): https://askubuntu.com/a/1027703
The default gpg config on Arch does not include:
auto-key-locate cert,pka,dane,wkd,keyserver
...and I did not supply it on the command line (I didn't know it existed). Once this option was specified, gpg found the keys.
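In practice that can be done either per invocation or permanently (a sketch; the mechanism list is the one from the linked answer, and newer gpg versions may not support every mechanism in it):
# one-off, on the command line
gpg2 --auto-key-locate cert,pka,dane,wkd,keyserver --locate-keys torvalds@kernel.org gregkh@kernel.org
# or permanently, in ~/.gnupg/gpg.conf
echo 'auto-key-locate cert,pka,dane,wkd,keyserver' >> ~/.gnupg/gpg.conf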
The easiest way to import Linus's and Greg's keys is to fetch them directly by fingerprint, as found via your link.
Fetch Linus Torvalds' key:
gpg --search-keys ABAF11C65A2970B130ABE3C479BE3E4300411886
Fetch Greg Kroah-Hartman's key:
gpg --search-keys 647F28654894E3BD457199BE38DBBDC86092693E
This approach also makes it easier to ensure you fetch the correct keys and not just any key published under those email addresses.
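Once the keys are imported, verifying a kernel release follows the flow described on the kernel.org signatures page (a sketch; the version number is only an example, and the .sign file covers the uncompressed tarball):
xz -d linux-5.19.tar.xz
gpg2 --verify linux-5.19.tar.sign linux-5.19.tar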
The former xxx.BrokerImport key has expired, so I generated a new key with the same name 'xxx.Import' and imported it into the remote server. But I can't delete the former one. Because they have the same name, encryption with 'xxx.Import' fails; I guess it uses the former key rather than the newly imported one.
I want to delete the expired key on the remote server.
I use the root user to execute the commands:
[root@ip-xxx xxx_ansible]# gpg --delete-key B7C1CB35
But I get the following error:
gpg: WARNING: unsafe ownership on homedir `/XXX/XXX_Import_tools/Keys'
I used the root user to execute this; I have no idea why I don't have permission.
Then I tried:
[root@ip-xxx xxx_ansible]# sudo gpg --delete-key B7C1CB35
and got another error:
gpg: key "B7C1CB35" not found: Unknown system error
gpg: B7C1CB35: delete key failed: Unknown system error
However, the public key does exist:
[root@ip-xxx xxx_ansible]# gpg --list-keys
gpg: WARNING: unsafe ownership on homedir `/xxx/xxx_Import_tools/Keys'
/xxx/xxx_Import_tools/Keys/pubring.gpg
------------------------------------------------
pub   2048R/B7C1CB35 2016-05-12 [expired: 2018-04-24]
uid                  xxx.Import <xxx@xxx.com>
pub   2048R/B75F015E 2018-07-23
uid                  xxx.Import <xxx@xxx.com>
sub   2048R/65AED995 2018-07-23
Does anyone have any idea about this? I hope to get your help.
Since I have resolved this issue, I'd like to share my solution.
I wanted to delete the key directly with a command, but because of the permission problem I instead deleted pubring.gpg / secring.gpg / trustdb.gpg on the remote server. On the next deployment the keys are imported again by the Ansible script, and these files are regenerated.
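Concretely, that amounts to something like the following (a sketch; the paths are taken from the listing above, and I'm assuming the classic GnuPG keyring layout plus an Ansible run that re-imports the keys afterwards):
# remove the keyring files; they are recreated and repopulated on the next deployment
rm /xxx/xxx_Import_tools/Keys/pubring.gpg \
   /xxx/xxx_Import_tools/Keys/secring.gpg \
   /xxx/xxx_Import_tools/Keys/trustdb.gpg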