How to redeploy network? - hyperledger-composer

I have a business network in which I fixed a small bug in the .js file and added a function. I would like to redeploy the network (on the same version).
I stop/teardown Fabric and restart it, delete the card and .bna file, then re-create the card and .bna file. After that I install and start the network. The last step is to start the REST server.
Even after all these steps, the REST server does not list my new function, which suggests the network has not actually been updated.
Do I have to change the version number when I modify the script.js and model.cto files?

As david_k points out in the comments above, you should use composer network upgrade to upgrade the business network (there is no need to tear down your Fabric environment), as well as stop the REST server as you've done. See https://hyperledger.github.io/composer/latest/reference/composer.network.upgrade.html and an example of it in use in the tutorials: https://hyperledger.github.io/composer/latest/tutorials/queries . Once you've upgraded your business network and pinged it successfully, you can stop/remove the old dev-* business network containers as indicated. You would then start the REST server again, using the same business network card (e.g. an admin card) when prompted, or as a parameter to the start command. Then, in a new browser session, you can test your REST APIs (or however suits you). If you're still not seeing the new function (or it errors), check the decorators/naming in your logic.js file to make sure the right transaction function is being invoked for a named transaction.
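A minimal sketch of that flow, assuming the network is named my-network and you bump the version in package.json from 0.0.1 to 0.0.2 (the network name, card names and versions here are placeholders):
# bump "version" in package.json, then re-create the archive from the project directory
composer archive create -t dir -n . -a my-network@0.0.2.bna
# install the new version on the peers and upgrade the running network
composer network install -c PeerAdmin@hlfv1 -a my-network@0.0.2.bna
composer network upgrade -c PeerAdmin@hlfv1 -n my-network -V 0.0.2
# confirm the new version is live, then restart the REST server
composer network ping -c admin@my-network
composer-rest-server -c admin@my-network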

Related

Unable to build extended domain with WLS, Forms, Reports: expected directories not created

I am in the middle of building a 4-node application layer using WLS, Oracle Forms and Oracle Reports. I have built an ADMIN node, successfully run the RCU, and run config.sh.
I fully defined the entire domain (all 4 nodes) while running config.sh, and have copied the domain definition to the 2nd node using pack & unpack.
When I attempt to install and build on the 2nd node (ADMIN does not run here) and run Forms and/or Reports for the first time, many directories are created automatically,
but some I expect to be created are missing.
For example:
$DOMAIN_HOME/config/fmwconfig/components/FORMS/instances/forms2/server/
did not get created.
What step did I miss here that results in some of the necessary directories not being created?
This is because the FORMS SystemComponents are not installed in the new instance locations under <domain_name>/config/fmwconfig/components/FORMS/<instance> and the FORMS components are not carried across by the pack command.
Re-running the config wizard will allow you to install the components on the new instances.
Alternatively, the instance definitions can be added on the Admin Server with WLST in offline mode only:
readDomain('<$DOMAIN_HOME>')
# machine the new component should run on; placeholder name, adjust to your domain
machineName = 'machine2'
print('Create FORMS SystemComponent forms2')
cd('/')
create('forms2', 'SystemComponent')
cd('/SystemComponent/forms2')
cmo.setComponentType('FORMS')
set('Machine', machineName)
updateDomain()
closeDomain()
The above only works if the Managed Server shares the domain's filesystem with the Admin Server.
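A minimal way to run that script offline, assuming you saved it as add_forms2.py (the file name is a placeholder) on the Admin Server host:
# run the offline WLST script with the WLST shipped in your middleware home
$ORACLE_HOME/oracle_common/common/bin/wlst.sh add_forms2.py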
Also see:
https://github.com/galiacheng/oracle-forms-reports-weblogic-on-azure#create-managed-servers-and-forms-component

Multiple iDempiere instances in one server

I need to install multiple iDempiere instances on one server. The customized packages differ in build and in the database they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers, I copy the installation to different paths
and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test
The way you describe is not possible, because the iDempiere web app is always served as
http://hostname:port/webui
Running multiple instances of iDempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files. So you are just fine making a full installation for each instance.
Make sure each instance uses its own data base. Use different names for the instance data bases.
Make sure the idempiere server instances use different tcp ports.
If you really need to use a single port to access all of the instances, you can use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then let you do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job.)
There is another benefit to using subdomains for browser access: if all your server instances use the same host name, the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session, as discussed in the iDempiere Google group.
Use different DB user names. The docs advise not to change the default user name Adempiere, and this is OK for a single-instance installation. Still, if you use a single DB user for all of your instances, you will run into trouble once you need to restore a database from a backup file: RUN_DBRestore.sh will delete and recreate the DB user, which is not possible when the user owns more than one DB.
You can run all of your instances as services in parallel. Before installing another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGi console. For this to work without problems in a multi-instance environment, you need to assign a different telnet port number to each of the instances: in the editor of your choice, open the renamed service script, find the line export TELNET_PORT=12612, and change the port number to something else. A sketch of these last two steps follows this list.
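A rough sketch of that service setup for two instances, assuming Debian/Ubuntu-style init scripts and the example paths from above (the instance names and port numbers are placeholders):
# rename the service script of the first instance before installing the second
sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-production
# after installing the second instance, rename its script as well
sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-test
# give each instance its own OSGi telnet port (the default is 12612)
sudo sed -i 's/^export TELNET_PORT=.*/export TELNET_PORT=12613/' /etc/init.d/idempiere-test
# register both services with the service controller and start them
sudo update-rc.d idempiere-production defaults
sudo update-rc.d idempiere-test defaults
sudo service idempiere-production start
sudo service idempiere-test start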
Please note:
OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; if you are on another OS you will need to do some research.
I have been using the described approach to host iDempiere versions 5 and 6 for some time now and have not had any problems so far. Still, make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them), please report back to the community (by giving your own answer to this question or by posting to the iDempiere Google group). Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose different web ports for each installation. You may also need to slightly change the web server configuration if it uses some default ports.

Installed Zone Alarm on Amazon EC2 Windows Instance and cannot access now. How do I fix this?

I messed this up.
I installed ZoneMinder and now I cannot connect to my VPS via Remote Desktop; it must have blocked the connections. I didn't know it would start blocking right away; I expected it to let me configure it first.
How can I solve this?
Note: My answer is under the assumption this is a Windows instance due to the use of 'Remote Desktop', even though ZoneMinder is primarily Linux-based.
Short answer is you probably can't and will likely be forced to terminate the instance.
But at the very least you can take a snapshot of the hard drive (EBS volume) attached to the machine, so you don't lose any data or configuration settings.
Without network connectivity your server can't be accessed at all, and unless you've installed other services on the machine that are still accessible (e.g. ssh, telnet) that could be used to reverse the firewall settings, you can't make any changes.
I would attempt the following, in this order (although they're long shots):
Restart your instance using the AWS Console (maybe the firewall won't be enabled by default on reboot and you'll be able to connect).
If this doesn't work (and it probably won't), you're going to need to stop your crippled instance, detach the volume, spin up another EC2 instance running Windows, and attach the old volume to the new instance.
Here's the procedure with screenshots of the exact steps, except your specific steps to disable the new firewall will be different.
Once that's done, you'll need to find instructions on manually uninstalling your new firewall:
Take a snapshot of the EBS volume attached to it to preserve your data (essentially the C: drive); it appears on the EC2 console page under the 'Volumes' menu item. This way you at least don't lose any data.
Start another Windows EC2 instance, and attach the EBS volume from the old one to this one. RDP into the new instance and attempt to manually uninstall the firewall.
At a minimum at this point you should be able to recover your files and service settings very easily into the new instance, which is the approach I would expect you to have more success with.
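A rough outline of those recovery steps with the AWS CLI, in case it helps (all instance and volume IDs below are placeholders; the same can be done from the console):
# snapshot the volume first so no data is lost
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-rescue backup"
# stop the crippled instance and detach its root volume
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaa
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
# attach the volume to a fresh Windows instance as a secondary disk
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0bbbbbbbbbbbbbbbb --device xvdf
# then RDP into the rescue instance and uninstall/disable the firewall on the attached disk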

MVC3 site deployed on IIS6 stops working after 20 minutes with 404 Not Found

I'll try to make this short, feel free to ask for more details.
A mobile edition of a web site has been created using MVC3 Razor and deployed to an IIS6 web server using extensionless URLs. Since .NET 4 is installed on the server, no special configuration was done on the server to get extensionless URLs to work. When I try to access the site with the URL http://site/m/ I get a 404 Not Found error.
What I do to produce this problem:
Right-click on project in VS2010 and publish to local file system.
ZIP all the files, transfer them to the production server, and unzip there
Right click on production web-site and add a virtual directory for the new application
Create a new application pool with all default settings
Put the new virtual directory/application in that application pool
Try to access the URL in the browser; receive 404 Not Found
The thing that puzzles me is that if I replace Step 1 with "File -> Create New MVC3 Project" and then publish to the local file system, everything works fine:
The test project is displayed in the browser with the name I used: http://site/mvctest/
I do not need to use any extensions
It does not stop working after 20 minutes (see next paragraph)
And now for the (even) weirder part:
If I now move the "m" application into the application pool just created for the "mvctest" application, it works too. But only for 20 minutes (or whatever value I have set for "Shutdown worker process after being idle for").
Any ideas?
EDIT: If I add a wildcard mapping to the /m/ virtual directory it works, but couldn't that also affect performance in a bad way?
It sounds like in your first scenario the handler isn't set up to handle the MVC requests. IIS 6 has no integrated pipeline, so extensionless MVC routes need an extension mapped, i.e. a wildcard mapping to aspnet_isapi.dll, which matches what you found in your EDIT.
Also check the event log for rapid-fail protection kicking in because of worker-process resets; that would explain the site dying after the idle timeout.
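For reference, a sketch of the wildcard-mapping route on IIS 6 (paths assume 32-bit .NET 4.0; use Framework64 on a 64-bit server):
rem make sure ASP.NET 4.0 is registered with IIS
%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i
rem then in IIS Manager: right-click the /m/ virtual directory -> Properties ->
rem Virtual Directory tab -> Configuration... -> Insert... a wildcard application map
rem pointing to %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll
rem with "Verify that file exists" unchecked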

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around it; I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survived a reboot.
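If you did go that route, the startup task would just be a .cmd script run at role start; a minimal sketch, assuming the folder from your error message (the path and the overly permissive grant target are illustrative only):
rem startup.cmd - grant write access to the log folder
rem %RoleRoot% is set by Azure; the path below is taken from the error above
icacls "%RoleRoot%\approot\framework\log" /grant "Everyone:(OI)(CI)M"
exit /b 0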
Recommendation: Don't write local, read below ...
EDIT: I got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config, then access it from PHP using a DLL reference, and then you will have access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing the config from PHP:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so that you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you control (Azure Storage, an external call-out, etc.).
