How to set the endorsement policy after adding a new organization - hyperledger-composer

Starting from a running network with 3 organizations, I dynamically added a new organization (org4). Now I want to update the endorsement policy so that it covers the new organization and also handles failure cases: for example, if the machine that hosts org4 breaks down, the other organizations can no longer perform operations on the network. I tried to upgrade the smart contract with the new endorsement policy, but it doesn't work. How can I solve this issue?
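For context on what that upgrade step looks like under the hood: Composer delegates chaincode operations to the Fabric Node SDK (fabric-client), where an upgrade proposal can carry a new endorsement policy. Below is a minimal sketch, assuming Fabric 1.x, an already-constructed client and channel, a chaincode named my-network, and MSP IDs Org1MSP through Org4MSP (all of these names are placeholders). A "3-of-4" policy keeps the network operational even if one organization, such as org4, is offline.

    // Sketch only: upgrading chaincode with a new endorsement policy via
    // fabric-client. All names (chaincode id, version, MSP IDs) are placeholders.
    import * as Client from 'fabric-client';

    async function upgradeWithPolicy(client: Client, channel: Client.Channel) {
      const request = {
        chaincodeId: 'my-network',            // assumed chaincode name
        chaincodeVersion: '0.2.0',            // the new version being installed
        txId: client.newTransactionID(true),  // upgrade requires an admin identity
        // Require endorsement from any 3 of the 4 organizations, so the
        // network keeps working even when one org (e.g. org4) is down.
        'endorsement-policy': {
          identities: [
            { role: { name: 'member', mspId: 'Org1MSP' } },
            { role: { name: 'member', mspId: 'Org2MSP' } },
            { role: { name: 'member', mspId: 'Org3MSP' } },
            { role: { name: 'member', mspId: 'Org4MSP' } }
          ],
          policy: {
            '3-of': [
              { 'signed-by': 0 },
              { 'signed-by': 1 },
              { 'signed-by': 2 },
              { 'signed-by': 3 }
            ]
          }
        }
      };
      // Endorsers simulate the upgrade; the responses must then be submitted
      // to the orderer (channel.sendTransaction) for the policy to take effect.
      return channel.sendUpgradeProposal(request);
    }

Note that the new policy only applies once the upgrade transaction is committed to the channel. Composer also documents an endorsementPolicyFile option that can be passed when starting a network, which may be the more natural route if you are working purely through the Composer tooling.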
Thanks
Davide

Related

AutoScaling EC2 Instances

I'm hoping to move an application to AWS.
I would like to use Auto Scaling so that not all my EC2 instances are running when application usage is quiet.
My problem is this:
I have one service account used for all communication between the various components of the application and the servers in that environment.
We have a security exception with my company which allows us to use the service account to perform its actions on each individual server.
Every time we introduce a new server to the environment, we have to request that the security team update our exception list to allow the new server in as well.
There is no automatic method for doing this. We have to submit a request to the security team asking for the new server to be added to the exception.
So while Auto Scaling would be perfect, how can it work in this case if, each time a server is added, the security team needs to be notified so they can add the new server to the exception list?
Thanks
You can get notifications when your Auto Scaling group scales either up or down. SNS can deliver these in a variety of ways, including SMS (text) messages to a cell phone; a sketch of wiring this up follows.
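Hooking the group up to an SNS topic is a single API call. A minimal sketch using the AWS SDK for JavaScript, where the region, group name, and topic ARN are placeholders:

    // Sketch: subscribe an SNS topic to Auto Scaling launch/terminate events.
    // Region, group name and topic ARN below are placeholders.
    import * as AWS from 'aws-sdk';

    const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });

    autoscaling.putNotificationConfiguration({
      AutoScalingGroupName: 'app-asg',
      TopicARN: 'arn:aws:sns:us-east-1:123456789012:scale-events',
      NotificationTypes: [
        'autoscaling:EC2_INSTANCE_LAUNCH',
        'autoscaling:EC2_INSTANCE_TERMINATE'
      ]
    }).promise()
      .then(() => console.log('scale notifications configured'))
      .catch(err => console.error(err));

Anyone subscribed to the topic (email, SMS, or an HTTP endpoint) then hears about every launch and termination in the group.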
While this would work, it is incredibly manual. The goal of an Auto Scaling group is to let the environment expand and contract without human intervention. I personally would not implement this because, depending on the availability of your security team, they may be a bottleneck to scaling up. If for some reason they miss the scale-up event that signals them to do something, then you've got orphan machines that you're paying for that are doing nothing.
Additionally, there are ways to script the provisioning of a new machine, so perhaps there is a way to add what you want automatically. AWS calls this user data; you can learn a bit more about it from the AWS EC2 docs, and there is a sketch below.
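As a rough illustration, here is how a user-data script could be baked into the launch configuration the Auto Scaling group uses, again with the AWS SDK for JavaScript. The AMI, instance type, and script contents are placeholders for whatever your provisioning actually requires:

    // Sketch: attach a first-boot user-data script to the launch configuration
    // used by the Auto Scaling group. All names/IDs here are placeholders.
    import * as AWS from 'aws-sdk';

    const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });

    const userData = [
      '#!/bin/bash',
      '# e.g. register this host, install agents, file the firewall request...',
      'hostname >> /var/log/provisioned.log'
    ].join('\n');

    autoscaling.createLaunchConfiguration({
      LaunchConfigurationName: 'app-lc-v2',
      ImageId: 'ami-12345678',                           // placeholder AMI
      InstanceType: 't2.micro',
      UserData: Buffer.from(userData).toString('base64') // API expects base64
    }).promise()
      .then(() => console.log('launch configuration created'))
      .catch(err => console.error(err));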
But ultimately I'd really take a step back and look at your architecture. If you can't script the machine provisioning, then autoscaling is not very worthwhile; it's just plain "have devops add another machine if needed and hope they remember to take it down when it's not needed".

Preloading participant instances and issuing corresponding identities

What is a best practice or pattern that could be followed for the initial creation of participant instances and the issuing of corresponding identities for a deployed business network? Is the expectation that there will be some sort of bootstrapped config or transaction that preloads participants, or will it be the responsibility of an admin/regulator to do this after the network is deployed, or something else altogether? The docs talk about an existing participant (https://hyperledger.github.io/composer/managing/participantsandidentities.html), but how did that existing participant get loaded into the network?
We are tracking this requirement using this issue:
https://github.com/hyperledger/composer/issues/670
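Until that lands, one workable pattern is a one-off bootstrap script that an admin runs right after deployment. A minimal sketch using composer-client, in which the card name (admin@my-network), the namespace (org.example), the participant type (Member), and its name field are all assumptions about your model:

    // Sketch: preload a participant and issue it an identity after deploy.
    // Card name, namespace, type and fields are assumptions, not Composer defaults.
    import { BusinessNetworkConnection } from 'composer-client';

    async function bootstrap(): Promise<void> {
      const connection = new BusinessNetworkConnection();
      await connection.connect('admin@my-network');      // assumed admin card

      const factory = connection.getBusinessNetwork().getFactory();
      const registry = await connection.getParticipantRegistry('org.example.Member');

      // Create and persist the participant instance.
      const member = factory.newResource('org.example', 'Member', 'MEMBER_0001');
      member.name = 'Alice';                             // assumes the model has a name field
      await registry.add(member);

      // Bind a new identity to the participant; the secret goes to the end user.
      const identity = await connection.issueIdentity('org.example.Member#MEMBER_0001', 'alice1');
      console.log(identity.userID, identity.userSecret);

      await connection.disconnect();
    }

    bootstrap().catch(console.error);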

Active Directory Domain Services Auditing

I'll try to explain my goal as well as I can:
I want to trigger a script whenever a new computer is added to an Organizational Unit.
To do this I need to enable logging of this event under the local security policy/audit policy. I guess my question is: do I need to do this on all the domain controllers, or is it enough to do it on just one?
Also, is it possible to see the event from a member server with the Management Tools pack installed? I don't want to put too much load on the domain controllers.
Here is the Microsoft article that gives four ways of tracking changes in Active Directory. You will find everything you need, from configuring the event log to receiving notifications by way of different kinds of polling.

Is it possible to setup a private TFS checkin policy?

As I understand it, check-in policies are set on the server but enforced on the client.
Since they are enforced on the client, is there a way to set up a private check-in policy?
I want to have a check-in policy for myself, but I can't set it on the TFS server, as that would force everyone else to follow that policy as well.
No, that is not possible. If you're creating custom check-in policies you could have one filter by user, but you'd still have to turn it on at the server level and have every user install it locally.

Windows Azure Caching (Preview) ErrorCode<ERRCA0017>:SubStatus<ES0006>:

I'm using the role-based caching feature for a Windows Azure web role, configured as co-located. I've followed the steps given by the Windows Azure docs for Caching (Preview), but I get the following error:
ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.)
Additional Information: The client was trying to communicate with the server: net.tcp://127.255.0.4:20010/.
I'm running everything as localhost, using the local development storage, and my cache client is in the same role as the server. I've changed many configuration attributes, but I always get that exception or a similar one like "cannot connect to tcp...".
I'd appreciate some help. Thanks.
There are a couple of things which could be going wrong with your application.
The very first thing is to make sure that you have SDK 1.7 on your machine along with the Windows Azure Caching Services, and then verify that your reference is set to Windows Azure Cache (not to the Windows Server AppFabric SDK). I have seen such misconfigurations in the past lead to exactly these errors.
Next, have you changed your dataCacheClient identifier to your role name, as described in the documentation linked here? If you follow the documentation as described, you should not hit any error, so for the sake of checking what could be wrong, you can create the exact same application as described in that link and see whether it works or not.
To get a more detailed error, be sure to increase the DataCacheFactoryConfiguration.ChannelOpenTimeout value to something longer, e.g. 2 minutes instead of the default 20 seconds, as described here. This step will surface the inner exception, which may lead you to the actual root cause of your problem.
We use Azure co-located caching (not in preview anymore) as our session backer and have fairly regular outages, about once a month.
We tried using the Enterprise Library Transient Fault Handling block, but our instances still hang when caching experiences problems. I think the transient fault code would work for data caching, but for session backing there is some activity closer to the metal that we can't seem to code against.
The error codes have become more informative over the last year and go something like...
ErrorCode:SubStatus:The request timed out. Additional Information: The client was trying to communicate with the server: net.tcp://10.xx.xxx.xx:xxxxx/.
Our best guess so far, from experimenting and from MS support, is that each (or at least one) co-located cache role/instance needs to know about all the other instances' IPs. Since Azure can destroy and re-provision instances whenever it wants, this sometimes fails to update the dependent instances. This is secret sauce for Azure, but it is not a secret when our site goes down. I'm looking for any more information on this and to see how others are working around the issue.
One possible workaround: one of our talented platform administrators found that resetting IIS on the instances and scaling up two more instances seems to help the problem. This makes sense to me because it gives caching another chance to gather all the required info about the other instances. This is NOT CONFIRMED to solve the problem, but if we repeat it during the next outage it could be a valid workaround.
