How can you tell if a Solana node is synced?

I'm running a Solana node using the solana-validator command (see the Solana docs).
I'd like to know when my validator is ready to accept connections on the HTTP/RPC/WS port. What's the quickest way to check whether it's synced?
Currently I'm using wscat to try to connect to the websocket, but I'm unable to. I'm not sure if that's because the node isn't set up right, or because it's not synced, etc.
I know that if I run solana gossip I should see my IP in the list that populates... but is that the best way?

Run a curl command against your node for block height:
curl http://<IP>:<PORT> -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getBlockHeight"}'
And you'll get an output like:
{"jsonrpc":"2.0","result":105334870,"id":1}
Compare this to some other node, like a block explorer, and see if you're up to speed.
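If you want to script the comparison, here's a minimal sketch that checks your node against the public mainnet RPC endpoint (the <IP>/<PORT> placeholders and the use of jq are assumptions, not part of the original answer):
LOCAL=$(curl -s http://<IP>:<PORT> -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"getBlockHeight"}' | jq -r '.result')
REMOTE=$(curl -s https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1,"method":"getBlockHeight"}' | jq -r '.result')
echo "local=$LOCAL remote=$REMOTE lag=$((REMOTE - LOCAL))"
A lag at or near zero means you're caught up.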

Take a look at solana catchup, which does exactly what you're asking for: https://docs.solana.com/cli/usage#solana-catchup
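For example, a minimal sketch (the validator pubkey is a placeholder you'd replace with your identity pubkey; --our-localhost is available in recent CLI versions):
solana catchup <VALIDATOR_PUBKEY> http://<IP>:<PORT>
# or, when run directly on the validator host:
solana catchup --our-localhost
It polls your node's slot against the cluster and reports how far behind you are until you've caught up.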

Related

Consul crud operation commands for delete and get all

I have Consul entries in my Kubernetes cluster and need to perform CRUD operations. These are the commands I know, but I also need Get All and Delete, and this has to be done with HTTP requests, not the Consul CLI.
GET - curl -k -X GET <consul-url>/v1/kv/<key>?token=<token>
GET ALL -
CREATE/PUT - curl -k --request PUT --data '<value>' <consul-url>/v1/kv/<key>?token=<token>
DELETE -
Can someone please help me fill in those two empty commands?
I provided an answer here https://stackoverflow.com/a/61491161/12384224 on how you can retrieve all values from Consul's KV store.
Consul's KV Store API docs contain curl examples for creating and deleting a given key. Take a look at Consul's Transaction API documentation if you are looking for a way to create and delete keys en-masse.
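For completeness, a hedged sketch of the two missing calls against the KV HTTP API (the URL, key, and token are placeholders; passing the token as a ?token= query parameter mirrors the question, though an X-Consul-Token header also works):
# GET ALL - recursively list every key under a prefix
curl -k -X GET "<consul-url>/v1/kv/<key>?recurse=true&token=<token>"
# DELETE - remove a single key (add recurse=true to delete a whole prefix)
curl -k -X DELETE "<consul-url>/v1/kv/<key>?token=<token>"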

accessing F5 load balancer using unix script

I'm new to F5 load balancers. Is there any way I can stop/start servers in the F5 pool using Unix scripts?
Thanks,
Santosh
If you are going to stop/start pool members (nodes) directly on the BIG-IP, you can use TMSH commands within the script. In this case:
Force Node Offline: >tmsh modify /ltm node <nodename> state user-down session user-disabled - This will prevent new connections from occurring but will not drop existing connections (will not drain).
Delete Existing Connections: >tmsh delete /sys connection ss-server-addr <nodeIP> - This will force-drain any existing connections from the node (something to do after you force it offline and persistent connections are preventing maintenance).
Enable Node: >tmsh modify /ltm node <nodename> state user-up session user-enabled - This will return the node to accepting traffic from any disabled state.
After changing the configuration you'll want to run tmsh save /sys config.
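Put together, a minimal maintenance sketch (assuming SSH access to the BIG-IP; the hostname, node name, and node IP are placeholders):
ssh admin@<bigip> "tmsh modify /ltm node <nodename> state user-down session user-disabled"
ssh admin@<bigip> "tmsh delete /sys connection ss-server-addr <nodeIP>"
# ...perform maintenance on the server...
ssh admin@<bigip> "tmsh modify /ltm node <nodename> state user-up session user-enabled"
ssh admin@<bigip> "tmsh save /sys config"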
If you want to manage these attributes remotely, you can use the iControlREST API via curl or if you want, there's a python SDK available to make use of REST commands within your py scripts.
Curl example: >curl -sk -u XXXXX:XXXX https://big_ip_addr/mgmt/tm/ltm/node/~Common~NODE/ -H "Content-Type: application/json" -X PUT -d '{"state": "user-down", "session": "user-disabled"}'
Here are the available BIG-IP TMSH commands you can use within your script (DevCentral login required) and here is how to use the BIG-IP iControlREST API. I use this one myself so I can run simple scripts remotely to manage common objects. Here are the BIG-IP iControlREST commands specific to node management (again, DevCentral login required).
Hope this gets you where you need to be.

How do I use the public swagger-generator docker image to generate a client?

We have a fully dockerized web app with a valid Swagger definition for the API. The API runs in its own docker container, and we're using docker-compose to orchestrate everything. I want to generate a Ruby client based on the Swagger definition located at http://api:8443/apidocs.json.
I've pored through the documentation here, which led me to Swagger's public docker image for generating client and server code. Sadly, the documentation is lacking and offers no examples for actually generating a client with the docker image.
The Dockerfile indicates its container runs a web service, which I can only assume is the dockerized version of http://generator.swagger.io. As such, I would expect to be able to generate a client with the following command:
curl -X POST -H "content-type:application/json" -d \
'{"swaggerUrl":"http://api:8443/apidocs"}' \
http://swagger-generator:8080/api/gen/clients/ruby
No luck here. I keep getting "invalid swagger definition" even though I've confirmed the swagger definition is valid with (npm -q install -g swagger-tools >/dev/null) && swagger-tools validate http://api:8443/apidocs.
Any ideas?
Indeed you are correct: the docker image you're referring to is the same image used at http://generator.swagger.io.
The issue you're having is that the input parameter isn't correct.
Now to get it right, please note that the swagger-generator has a web interface. So once you start it up, like the instructions say, open it in a browser. For example (replace the GENERATOR_HOST with your machine's IP address):
docker run -d -e GENERATOR_HOST=http://192.168.99.100 -p 80:8080 swaggerapi/swagger-generator
then you can open the swagger-ui on http://192.168.99.100
The important part here is that you can use the UI to see the call syntax. If you're generating a client, go to http://192.168.99.100/#!/clients/generateClient select the language you want to generate and click the payload on the right. Replace the swaggerUrl field with the address of your server and voila.
You can use the output in the curl to figure out how to call from the command line. It should be easy from there.
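As a hedged sketch, the corrected call would point swaggerUrl at the full definition path from your question (note the .json suffix your original curl dropped):
curl -X POST -H "content-type:application/json" -d \
'{"swaggerUrl":"http://api:8443/apidocs.json"}' \
http://swagger-generator:8080/api/gen/clients/ruby
Also make sure the swagger-generator container can actually resolve and reach the api hostname on your docker-compose network, since it fetches the definition itself.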
Please keep in mind that just because a 3rd party tool says the swagger definition is valid doesn't mean it actually is. I don't think that's your issue, though, but 3rd party tool mileage may vary...

Installing Elasticsearch

I'm trying to index data in Elasticsearch. My problem is that after running the elasticsearch.bat command I can connect to the server and everything works, but after that I can't type anything into the command line. Do you have any idea what is wrong?
That's OK: you are seeing the Elasticsearch console output. Just open another console to make your input, or start Elasticsearch as a service (http://www.elastic.co/guide/en/elasticsearch/reference/1.3/setup-service-win.html)
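A minimal sketch of the service route, per the linked 1.x docs (run from your Elasticsearch home directory on Windows):
bin\service.bat install
bin\service.bat start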
There is no command-line input available for Elasticsearch itself. You perform operations on Elasticsearch with REST commands (or with a client API in, for example, Java).
You can use curl to do REST operations from a command line.
You can use an internet browser to do HTTP GET commands, and you can do the other REST commands (PUT, POST, DELETE) with a Chrome plugin like POSTMAN.
There are also Elasticsearch plugins that provide monitoring and management tooling via the browser.
Please read the Elasticsearch documentation!
For all operations on indexes, mappings, querying, etc, the Marvel plugin has the Sense REST API interface which is fabulous. Sense is wrapped within the Marvel plugin which is free for development.
It allows you to execute all possible ES API commands as JSON. We use it both as a way to prototype commands before implementing them in our ES client, and as a way to test very specific/boundary search scenarios.
There are lots of other cool plugins to help you manage your ElasticSearch, some of which are described here.
Good luck!
When you type only elasticsearch.bat, you are starting the Elasticsearch server in the foreground; that's why you are seeing real-time logs in your terminal and hence can't type anything.
Now, leave that running and open another terminal (no need to go to the Elasticsearch directory again) and just type
curl 'http://localhost:9200/?pretty' - but first make sure that curl is supported in your terminal; if not, you need to use another terminal that supports it, for example Git Shell for Windows.
Afterwards you can use this second terminal to do your indexing.
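For instance, a minimal indexing sketch from that second terminal (the index name, type, and document here are made-up placeholders):
curl -XPUT 'http://localhost:9200/my_index/my_type/1' -d '{
  "title": "hello world"
}'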
Via terminal, with curl -XGET (or -XPUT, -XDELETE, -XPOST) you can send commands to Elasticsearch. Note that searches go to the _search endpoint:
curl -XGET 'http://localhost:9200/your_index/_search' -d '{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      }
    }
  }
}'
You can also use the Chrome extension Sense, which can handle JSON configs (with handy history, nice highlighting).
I think you have misunderstood something:
Elasticsearch runs as an HTTP service; that's the reason you can't keep using that console.
Solution: just open another console.
But keep in mind you don't need to use a console: you can access it using any REST client. Take a look at "Postman - REST Client" and "Sense (Beta)". Both are Chrome extensions.

Hadoop webhdfs curl create file

I use Ubuntu 12 and Hadoop 1.0.3, and I'm using WebHDFS via curl to create a file:
curl -i -X PUT "http://localhost:50070/webhdfs/v1/test.txt?op=CREATE"
or:
curl -i -X PUT -T /home/hadoop/TestFile/test.txt "http://localhost:50070/webhdfs/v1/test?op=CREATE"
Both commands return:
HTTP/1.1 307 TEMPORARY_REDIRECT
What setting is missing from hdfs-site.xml? Or is some other permission not set?
Thanks!
According to the docs for Web HDFS, this is expected:
http://hadoop.apache.org/common/docs/stable/webhdfs.html#CREATE
When you make the first PUT, you'll be given a temporary redirect URL for a datanode, to which you can then issue another PUT command to actually upload the file into HDFS.
The document also explains the reasoning behind this 2-step create method:
Note that the reason of having two-step create/append is for preventing clients to send out data before the redirect. This issue is addressed by the "Expect: 100-continue" header in HTTP/1.1; see RFC 2616, Section 8.2.3. Unfortunately, there are software library bugs (e.g. Jetty 6 HTTP server and Java 6 HTTP client), which do not correctly implement "Expect: 100-continue". The two-step create/append is a temporary workaround for the software library bugs.
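A minimal sketch of the full two-step flow (the datanode host/port come from the Location header of the 307 and are shown as placeholders here):
# Step 1: ask the namenode; the 307 response's Location header names a datanode
curl -i -X PUT "http://localhost:50070/webhdfs/v1/test.txt?op=CREATE"
# Step 2: PUT the file contents to that datanode URL, query string included
curl -i -X PUT -T /home/hadoop/TestFile/test.txt "http://<DATANODE>:50075/webhdfs/v1/test.txt?op=CREATE&<params-from-redirect>"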
I know I am replying very late, but anyone else looking for answers here will be able to see it.
Hi @Krishna Pandey, this is the new link for WebHDFS:
https://hadoop.apache.org/docs/r1.0.4/webhdfs.html#CREATE
You can refer to this blog for steps and commands https://wordpress.com/post/had00ping.wordpress.com/194
