Increase the number of nodes in the network but limit the number that are TCP-enabled - yaml

I am building off the XO game from the Sawtooth SDK and am running into several issues. First, I am able to successfully modify the XOHandler to change the type of data I am uploading to the network. I have been working with the shell default, so I am a little unfamiliar with how the other containers work. I am running some tests and need to run networks with 100, 500, and 1000 nodes respectively. I am using a Docker environment and the yaml file that specifies 5 total validators. For my network I am doing a quorum-based consensus, so I only want maybe 10 validators that are TCP-enabled. Is there a good way of simulating this scenario? I thought I would just need to add a bunch of validators directly in the yaml file, but I found this in the Sawtooth documentation: https://sawtooth.hyperledger.org/docs/1.2/sysadmin_guide/pbft_adding_removing_node.html#adding-a-pbft-node-label
This is confusing to me because it sounds like these are the steps for the Ubuntu Sawtooth install. I just copied sawtooth-core off of GitHub, so I am not sure which steps I should be following. I also have the issue of the yoml file. I found some material mentioning that this file stores some information about the validators, but when I go to the ./etc/sawtooth/ directory inside the Docker container there is no yoml file. Am I in the wrong place? That is the first question.
My second question concerns the Dependencies array in the transaction header. My network is designed in a way that I will need dependencies set between different blocks. I am also running a one-block-per-batch network. Whenever I add dependencies, however, I get a RepeatGossip error saying the block has already been added. My input and output fields are the same. Any feedback on this message? Of course the parent has already been added, but specifying the parents in Dependencies shouldn't block the block upload.

Related

OMNeT++ WirelessHost sending useful information

I am new to OMNeT++, and I am trying to send messages from one node to another wirelessly.
Basically, I would like to do something like the tictoc example of OMNeT++ (https://docs.omnetpp.org/tutorials/tictoc/), but wirelessly.
I have installed INET already, and I have seen the wireless example, which uses UdpBasicApp. However, I do not know how to change the data of the messages sent while using UdpBasicApp. In my case, what I am sending (i.e. the data) is very important because it is part of a bigger project. Eventually, the idea is to use the 802.11p standard (which exists in VEINS) and multiple nodes, but I thought this was a good place to start.
I hope someone can help me out.
Kind regards
Just to be aware: 802.11p is also supported directly in INET. Just set the opMode parameter on the network interface.
You will need to create your own application module. Take a look at UdpBasicApp, copy it, and modify it according to your needs. Check the sendPacket() function, which creates an ApplicationPacket. ApplicationPacket contains only a single sequence number, but you can create your own application-level data structure and use that for sending.
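For example, a rough sketch of what a modified sendPacket() might look like against the INET 4.x API (MyApp, MyAppPayload, and setSensorValue are placeholder names; MyAppPayload would be a chunk you define in your own .msg file, modeled on inet/applications/base/ApplicationPacket.msg):
void MyApp::sendPacket()
{
    Packet *packet = new Packet("MyAppData");
    // Build your own application-level payload instead of ApplicationPacket.
    const auto& payload = makeShared<MyAppPayload>();
    payload->setChunkLength(B(par("messageLength")));
    payload->setSequenceNumber(numSent);
    payload->setSensorValue(42.0);            // the data that actually matters to your project
    packet->insertAtBack(payload);
    L3Address destAddr = chooseDestAddr();    // inherited from the copied UdpBasicApp code
    socket.sendTo(packet, destAddr, destPort);
    numSent++;
}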

How to increase resource allocation to RavenDB

I'm trying to process a document and store many documents into RavenDB, which I have running locally.
I'm getting the error
Tried to send *ravendb.BatchCommand request via POST http://127.0.0.1:8080/databases/mydb/bulk_docs to all configured nodes in the topology, all of them seem to be down or not responding. I've tried to access the following nodes: http://127.0.0.1:8080
I was able to fetch mydb topology from http://127.0.0.1:8080.
Fetched topology: ( url: http://127.0.0.1:8080, clusterTag: A, serverRole: Member)
exit status 1
To me, it sounds like maybe my local cluster is running out of compute to process the large amount of data I'm trying to store.
RavenDB says I'm using 3 of 12 available cores, and I'd also like to make sure it's using a reasonable amount of the RAM I have available on the machine (I'd even be happy to give it swap).
But reading around online, I'm not finding much helpful information for making sure RavenDB is able to use what it needs. I found settings.json, so I can add configuration that should theoretically get picked up by the server, but I'm not making much progress.
I also found some settings and changed "reassign cores" to 12, but it still says 3/12 cores and 6/31.1 GB of memory are being used.
If an alternative solution is recommended, I'm all ears. I just need to run things locally, and storing everything as JSON files doesn't enable fast enough retrieval for my use case.
Update
I was able to install MongoDB and set up a local database. It hasn't given me any problems yet. RavenDB would look appealing if I understood it better, but I guess I'll stick with the tried and true for this project.
It is highly unlikely that you managed to run out of resources on the server with 3 cores / 6 GB unless you are pushing hundreds of millions of documents and doing very heavy work.
Do you get any error on the server? There should be more details on the error or in the server log.

Process Laravel/Redis jobs from multiple servers

We are building a reporting app on Laravel that needs to fetch user data from a third-party server that allows 1 request per second.
We need to fetch 100K to 1000K rows per user, and we can fetch at most 250 rows per request.
So the restrictions are:
1. We can send 1 request per second
2. We get at most 250 rows per request
So it requires 400-4000 requests/jobs to fetch one user's data, which makes loading data for multiple users very time-consuming and slows the server down.
So now we are planning to load the data using multiple servers, say 4-10 servers to fetch user data, so that we can send 10 requests per second from 10 servers.
How can we design the system and process jobs from multiple servers?
Is it possible to use a dedicated server for hosting Redis and connect to that Redis server from multiple servers and execute jobs? Can any conflict/race-condition happen?
Any hint or prior experience related to this would be really helpful.
The short answer is yes, this is absolutely possible and is something I've implemented in production apps many times before.
Redis is just like any other service and can run anywhere, with clients from anywhere connecting to it. It's all up to your configuration of the server to dictate how exactly that happens (adding passwords, configuring spiped, limiting access via the firewall, etc.). I'd recommend reading up on the documentation they have in the Administration section here: https://redis.io/documentation
Also, when you do make the move to a dedicated Redis host, with multiple clients accessing it, you'll likely want to look into having more than just one Redis server running for reliability, high availability, etc. Redis has efficient and easy replication available with a few simple configuration commands, which you can read more about here: https://redis.io/topics/replication
Last thing on Redis: if you do end up implementing a master-slave setup, you may want to look into high availability and auto-failover in case your master instance goes down. Redis has a really great utility built into the application that can monitor your master and slaves, detect when the master is down, and automatically reconfigure your servers to promote one of the slaves to the new master. The utility is called Redis Sentinel, and you can read about it here: https://redis.io/topics/sentinel
For your question about race conditions, it depends on how exactly you write the jobs that are pushed onto the queue. For your use case, it doesn't sound like this would be too much of an issue, but it really depends on the constraints of the third-party system. Either way, if you are subject to a race condition, you can still implement a solution for it, but you would likely need to use something like a Redis lock (https://redis.io/topics/distlock). Taylor recently added a new feature to the upcoming Laravel 5.6 that I believe implements a version of the Redis lock in the scheduler (https://medium.com/@taylorotwell/laravel-5-6-preview-single-server-scheduling-54df8e0e139b). You can look into how that was implemented and adapt it for your use case if you end up needing it.
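If you do end up needing a lock yourself, a minimal sketch using Laravel's cache locks (available from roughly 5.6 onward when your cache driver is Redis; the lock name and the fetchNextPage() helper are made up for illustration):
use Illuminate\Support\Facades\Cache;

// e.g. inside a queued job's handle() method:
$lock = Cache::lock('third-party-api', 2);   // hold the lock for at most 2 seconds
if ($lock->get()) {
    try {
        // Only one worker across all servers gets here at a time, which also
        // helps respect the third party's 1-request-per-second limit.
        $rows = $this->fetchNextPage($this->user);   // hypothetical helper
        // ... store $rows ...
    } finally {
        $lock->release();
    }
} else {
    // Couldn't get the lock: put the job back on the queue and retry shortly.
    $this->release(1);
}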

OMNeT++: How to connect up a network in my application

I am trying to build a simple simulation using OMNeT++. I'd like to avoid using things like NED. Instead, I want to allocate the modules and set up the simulation topology entirely under program control (i.e. I'll configure my simulation and set up connections etc. in main() instead of using NED).
How do I go about doing this? (Any examples you can point me to?)
thanks
Create a top-level network in NED and drop in a single simple module called builder or something like that. Then create/connect the necessary modules in that module's initialize() method (or schedule a message at t=0s and do the network buildup there).
There is an example in OMNeT++ that does exactly this, in samples/routing. Choose the NetBuilder configuration. That example reads the network topology from an external file, but you can change it to create any topology you would like to have.
The actual code for the network generation is in samples/routing/builder/netbuilder.cc, in NetBuilder::buildNetwork().
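The core pattern (roughly what buildNetwork() does; the mysim.Node NED type and the out/in gate names below are just placeholders) looks like this:
// e.g. inside your builder module's initialize(), or in a handler for a t=0 self-message
cModule *parent = getParentModule();

// create two submodules from an existing NED type
cModuleType *nodeType = cModuleType::get("mysim.Node");
cModule *a = nodeType->create("nodeA", parent);
cModule *b = nodeType->create("nodeB", parent);
a->finalizeParameters();   // this also creates the gates
b->finalizeParameters();

// connect nodeA.out --> nodeB.in through a delay channel
cDelayChannel *channel = cDelayChannel::create("channel");
channel->setDelay(0.001);
a->gate("out")->connectTo(b->gate("in"), channel);

// build the module internals, then let the new modules start and initialize
a->buildInside();
b->buildInside();
a->scheduleStart(simTime());
b->scheduleStart(simTime());
a->callInitialize();
b->callInitialize();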

Best practice when using a Rails app to overwrite a file that the app relies on

I have a Rails app that reads from a .yml file each time it performs a search. (This is a full-text search app.) The .yml file tells the app which URL it should be making search requests to, because different versions of the search index reside on different servers, and I occasionally switch between indexes.
I have an admin section of the app that allows me to rewrite the aforementioned .yml file so that I can add new search URLs or remove unneeded ones. While I could manually edit the file on the server, I would prefer to be able to also edit it in my site admin section so that when I don't have access to the server, I can still make any necessary changes.
What is the best practice for making edits to a file that is actually used by my app? (I guess this could also apply to, say, an app that had the ability to rewrite one of its own helper files, post-deployment.)
Is it a problem that I could be in the process of rewriting this file while another user connecting to my site wants to perform a search? Could I make their search fail if I'm in the middle of a write operation? Should I initially write my new .yml file to a temp file and only later replace the original .yml file? I know that a write operation is pretty fast, but I just wanted to see what others thought.
UPDATE: Thanks for the replies everyone! Although I see that I'd be better off using some sort of caching rather than reading the file on each request, it helped to find out what the best way to actually do the file rewrite is, given that I'm specifically looking to re-read it each time in this specific case.
If you must use a file for this then the safe process looks like this:
Write the new content to a temporary file of some sort, in the same directory (so that it ends up on the same filesystem as the target).
Use File.rename to atomically replace the old file with the new one.
If you don't use separate files, you can easily end up with a half-written broken file when the inevitable problems occur. The File.rename class method is just a wrapper for the rename(2) system call and that's guaranteed to be atomic (i.e. it either fully succeeds or fully fails, it won't leave you in an inconsistent in-between state).
If you want to replace /some/path/f.yml then you'd do something like this:
begin
  # Write your new stuff to /some/path/f.yml.tmp here
  File.rename('/some/path/f.yml.tmp', '/some/path/f.yml')
rescue SystemCallError => e
  # Log an error, complain loudly, fall over and cry, ...
end
As others have said, a file really isn't the best way to deal with this and if you have multiple servers, using a file will fail when the servers become out of sync. You'd be better off using a database that several servers can access, then you could:
Cache the value in each web server process.
Blindly refresh it every 10 minutes (or whatever works).
Refresh the cached value if connecting to the remote server fails (with extra error checking to avoid refresh/connect/fail loops).
Firstly, let me say that reading that file on every request is a performance killer. Don't do it! If you really, really need to keep that data in a .yml file, then you need to cache it and reload it only after it changes (based on the file's timestamp).
But don't check the timestamp on every request - that's almost as bad. Check it on a request only if it's been n minutes since the last check, probably in a before_filter somewhere. And if you're running in threaded mode (most people aren't), be careful that you're using a Mutex or something.
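A minimal sketch of that caching idea (the SearchConfig name, file path, and five-minute interval are all placeholders):
require 'yaml'

class SearchConfig
  CONFIG_PATH    = 'config/search_servers.yml'   # wherever your .yml actually lives
  CHECK_INTERVAL = 300                            # only stat the file every 5 minutes

  @mutex = Mutex.new

  def self.current
    @mutex.synchronize do
      now = Time.now
      if @config.nil? || @checked_at.nil? || (now - @checked_at) > CHECK_INTERVAL
        mtime = File.mtime(CONFIG_PATH)
        if @mtime != mtime                        # reload only if the file actually changed
          @config = YAML.load_file(CONFIG_PATH)
          @mtime  = mtime
        end
        @checked_at = now
      end
      @config
    end
  end
end

Then a controller (or before_filter) just calls SearchConfig.current instead of re-parsing the file on every request.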
If you really want to do this via overwriting files, use the filesystem's locking features to block other threads from accessing your configuration file while it's being written. Maybe check out something like this.
I'd strongly recommend not using files for configuration that needs to be changed without re-deploying the app though. First, you're now requiring that a file be read every time someone does a search. Second, for security reasons it's generally a bad idea to allow your web application write access to its own code. I would store these search index URLs in the database or a memcached key.
Edit: As @bioneuralnet points out, it's important to decide whether you need real-time configuration updates or just eventual syncing.

Resources