Can a Kofax client connect to different servers?

We have Kofax Capture 9 set up in two different departments on the same network. One of the departments is planning to help the other with a project. Is it possible for one department's Kofax scan stations to connect to the other department's Kofax server to help with scanning and validation? Can it be done in a way that lets them easily switch between servers?

If I understand your problem correctly, your KC server setup is like this:
KC Server 1 (Department 1)
Client 1 (Department 1)
Client 2 (Department 1)
Client 3 (Department 1)
KC Server 2 (Department 2)
Client 1 (Department 2)
Client 2 (Department 2)
Client 3 (Department 2)
Now, you want the clients of KC Server 1 to become clients of KC Server 2.
If this is the case, it can be done by editing the "ServerPath" registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Kofax Image Products\Ascent Capture\3.0
Open the Registry Editor on each of the clients of KC Server 1, change the "ServerPath" key's value to the CaptureSV path of KC Server 2, and then restart the Kofax Capture services. Job done!
To move the clients back to KC Server 1, repeat the same step and set the CaptureSV path of KC Server 1.
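For example, a minimal sketch of the same edit from an elevated command prompt (\\KCSERVER2 is a hypothetical machine name; substitute your own CaptureSV share):
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Kofax Image Products\Ascent Capture\3.0" /v ServerPath /t REG_SZ /d "\\KCSERVER2\CaptureSV" /f
Run it on each client of KC Server 1, then restart the Kofax Capture services.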

You can redirect the clients to a different license server as well:
32-bit: HKEY_LOCAL_MACHINE\SOFTWARE\Kofax\SALicClient\
64-bit: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Kofax\SALicClient\

You can use ACDeployUtil to change the ServerPath with the following command:
ACDeployUtil /ServerPath:"\\Machine Name\CaptureSV"
Run ACDeployUtil to view the current client/server configuration.

Though it is essentially true that you can change the ServerPath key to redirect a client, doing so isn't officially supported. The problem you will run into is with licensing. If you direct the client to an environment that has a higher scan volume remaining, the volume will be reduced to the last value known by the client.
Depending on how similar your volume counts are, this might not be immediately noticeable, but if you go back and forth it will keep lowering both licenses to the lowest number it has seen. The only way to resolve this will be to reset the license by working with both sales and tech support.
The only way to avoid this situation is to actually have a single license server used by both environments. Of course, this depends on whether you actually have a single license or multiple licenses.

Replica database is not accessible in Read Scale Availability Group in SQL Server 2017 Standard edition

I'm looking into ways of replicating databases from on-premise environments to Azure, and one of the options I found was setting up a read-scale availability group.
The reason I'm using a read-scale and not an Always On availability group is that I don't want to use SQL Server Enterprise edition due to the cost.
I followed a tutorial from Microsoft (MS TUTORIAL) to set this all up, and in the end I think I got it working, as my database appeared in the Azure environment.
However, the problem is that my replica always stays in the Synchronizing state - which is probably because I chose asynchronous replication by using the AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT option - but even worse is that I can't access the database itself.
Each time I try to fire a query against it, it comes back with an "Object is not accessible" exception.
After some reading, I found that the cause might be that my replica doesn't have a secondary role. Trying to set this via the ... SECONDARY_ROLE({ALLOW_CONNECTIONS = ALL}) ... command clearly states that this feature is not available in the Standard edition of SQL Server.
My whole confusion comes from the Microsoft documentation (MS DOCS), which says that "with availability groups, one or more secondary replicas can be configured to support read-only access to secondary databases", which is exactly what I'm failing to achieve.
Did anybody have the same issue, or does anybody know how to configure a read-scale availability group on SQL Server Standard so that my second replica is accessible and readable as well?
P.S. I did look at actual SQL replication with transactional replication, but there are quite a few moving parts there, so I'm exploring all options before making a decision.
Based on a Twitter conversation, I found out that you need to create a snapshot of the database on the secondary replica in order to read from it.
Please read this Twitter thread.
I also added a suggestion in the feedback channel to fix the documentation.
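For reference, a minimal T-SQL sketch of that workaround, run on the secondary replica (MyDB, MyDB_Data, and dbo.SomeTable are hypothetical names; database snapshots are available in Standard edition from SQL Server 2016 SP1 onward):
-- Create a static, read-only snapshot of the secondary database.
CREATE DATABASE MyDB_Snapshot
ON (NAME = MyDB_Data, FILENAME = 'D:\Snapshots\MyDB_Data.ss')
AS SNAPSHOT OF MyDB;
-- Read queries then target the snapshot, not the replica itself.
SELECT COUNT(*) FROM MyDB_Snapshot.dbo.SomeTable;
Because the snapshot is frozen at creation time, it has to be dropped and recreated whenever fresher data is needed.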

nca_connect_server: cannot communicate with host in LoadRunner 12.53

I'm currently testing HP LoadRunner 12.53 with Oracle EBS R12.2.5.
I created a simple script using both the Oracle Apps and the Oracle NCA + HTTP protocols (log in, bring up a form, and close/log out) and replayed it, but I ran into the error below (the same error for both scripts).
nca_connect_server: cannot communicate with host
icx_ticket is correlated and works OK, as it is picked up and replaced in the parameter.
There is no need to correlate JSessionIDForms, as EBS is running in socket mode.
It is just a simple script with a single correlation, but I can't find any clue about the error.
What could be the root cause of the error?
Where should I look for a clue? How can I make the error log more verbose and detailed?
Thanks in advance
Record it twice. If the value shifts, then correlate it.
Please ensure that you have properly set up the environment before recording. The steps below need to be taken to set up the environment:
1. Set the "record=names" flag for the specific user profile in the Oracle EBS application via an administrator login (search Google for how to achieve this, or simply ask your application team to do it for you).
2. Run-time settings and default.cfg file changes
Run-time settings
Keep the values below at a high limit to avoid replay timeout errors:
Run-Time Settings > Internet Protocol > Preferences > Options >
Step download limit
HTTP-request connect timeout
HTTP-receive receive timeout
Keep-Alive timeout
Run-Time Settings > Browser > Browser Emulation >
Simulate a new user on each iteration - checked
3. default.cfg file inside the script directory
The "RelativeURL={NCAJServSessionId}" statement in the default.cfg file rolls back each time we run the script, so we need to check that it is
/forms/lservlet;JsessionIDForms={NCAJServSessionId} (R12 version) or
/forms/formservlet?JServSessionIdforms={NCAJServSessionId} (EBS 11i version)
4. Correlation - last but not least
Ensure correct correlation of each and every parameter. The best way to achieve this is to record the script twice and compare the recordings with a suitable tool; correlate each value that changes between replays (see the sketch below).
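A minimal sketch of one such correlation in the LoadRunner C script (the left/right boundaries for icx_ticket are assumptions; take the real ones from your own recordings):
// Register the capture BEFORE the request whose response returns the value.
web_reg_save_param("icx_ticket",
    "LB=icx_ticket=",   // left boundary (assumed)
    "RB=&",             // right boundary (assumed)
    "ORD=1",            // capture the first occurrence
    LAST);
// ...the recorded login request follows here; the captured value is then
// referenced in later steps as {icx_ticket}.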
Note: Oracle EBS is not fully supported by LoadRunner. Please download the LoadRunner compatibility matrix and check whether your version is supported.

Where does Redis store the data

I am using Redis for pub/sub as well as for a server-side cache. I mean, my app server has a Redis server running as one process (functioning as a cache as well). I have several thin clients (running a Redis client) connected to this app server in pub/sub mode. I would like to know where Redis stores the cache data: on the server alone, or will there be a copy on the clients as well? Also, is it a good idea to use Redis in this fashion if there are close to 100 Redis clients connected to the server through a pub/sub channel?
Thanks
Redis is a (sort of) in-memory NoSQL database, but I found that my copy (running on Linux) dumps to /var/lib/redis/dump.rdb
Redis can manage a really big number of connections. By default it is an in-memory store (it is so fast thanks to keeping everything in RAM).
At the same time, it can be configured as a persistent store, dumping cached data to disk (every x seconds or every x updated keys).
So it can be configured depending on your needs; have a look here, and at the sketch below.
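For example, a minimal redis.conf sketch of RDB snapshotting (the save points shown are the shipped defaults; dir and dbfilename use values typical of Linux packages):
# Dump to disk after 900 s if >= 1 key changed, 300 s if >= 10 keys
# changed, or 60 s if >= 10000 keys changed.
save 900 1
save 300 10
save 60 10000
# Where the dump file is written.
dir /var/lib/redis
dbfilename dump.rdb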
All the cache data will be stored in the memory of the server, as given in the config of the running Redis server.
The clients do not hold any data; they only access the data stored by the Redis server.
I just installed Redis on a Mac via Homebrew. Without any configuration, I found that dump.rdb is in my working directory (where I launched redis-server).
You can figure that out with the config command.
redis-cli config get dir
However, as far as I know, pub/sub data is volatile and is neither stored nor cached in Redis at all. If you need that, you should look for a dedicated message broker such as RabbitMQ.
On my Ubuntu, it was at /var/lib/redis/dump.rdb. On my macOS (installed via brew), it was at /usr/local/var/db/redis/dump.rdb.
Default location
/var/lib/redis/
Redis saves all data in the server's memory and only periodically saves it to disk.
For the server <> client flow, all data travels through the server.
Redis can handle a large number of clients; the default limit is 10,000.
If you need more, you must reconfigure the OS, server settings, etc. - http://redis.io/topics/clients
As I understand the question, your concern is about the Redis server memory and the client (application) memory.
"I would like to know where Redis stores the cache data: on the server alone, or will there be a copy on the clients as well?"
Redis 6's client-side caching is what you are actually looking for. The server and the application both store copies and keep them in sync through protocol communication. Even though a few ways to accomplish this have been implemented, the following example mechanism (picked from the docs) will help you understand it well.
Client 1 -> Server: CLIENT TRACKING ON
Client 1 -> Server: GET foo
(The server remembers that Client 1 may have the key "foo" cached)
(Client 1 may remember the value of "foo" inside its local memory)
Client 2 -> Server: SET foo SomeOtherValue
Server -> Client 1: INVALIDATE "foo"
Hope this helps. See the docs for more elaboration.
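A minimal redis-cli sketch of the same flow (the -3 flag switches redis-cli to RESP3, which is required for server-pushed invalidations; "foo" is the example key from above):
redis-cli -3
CLIENT TRACKING ON
GET foo
From now on, when another client runs SET foo SomeOtherValue, this connection receives an invalidate message for "foo" and should drop its local copy.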

Multiple iDempiere instances in one server

I need to install multiple iDempiere instances on one server. The customized packages differ in build and in the database they use. Is there any way to deploy both on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help is appreciated.
When I want to run several application servers, I copy the path of the various installations and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test
The way you describe is not possible, because the iDempiere web app is always reached as
http://hostname:port/webui
Running multiple instances of iDempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files, so you are just fine making a full installation for each instance.
Make sure each instance uses its own database. Use different names for the instance databases.
Make sure the iDempiere server instances use different TCP ports.
If you really need to use a single port to access all of the instances, you can use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then let you do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job; see the sketch after this list.)
There is another benefit to using subdomains for browser access: if all your server instances use the same host name, the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session, as discussed here in the iDempiere Google group.
Use different DB user names. The docs advise not to change the default user name Adempiere, and this is OK for a single-instance installation. Still, if you use a single DB user for all of your instances, you will run into trouble once you need to restore a database from a backup file: RUN_DBRestore.sh will delete and recreate the DB user, which is not possible when the user owns more than one database.
You can run all of your instances as services in parallel. Before the installation of another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course, you will need to do some bookkeeping work with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGi console. For this to work without problems in a multi-instance environment, you need to assign a different telnet port number to each of the instances: in the editor of your choice, open the file /etc/init.d/idempiere, find the line export TELNET_PORT=12612, and change the port number to something else.
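As an illustration, a minimal Apache sketch of the mod_proxy setup mentioned above (prod.example.com and test.example.com are hypothetical subdomains; mod_proxy and mod_proxy_http must be enabled):
<VirtualHost *:80>
    ServerName prod.example.com
    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
<VirtualHost *:80>
    ServerName test.example.com
    ProxyPreserveHost On
    ProxyPass / http://localhost:8081/
    ProxyPassReverse / http://localhost:8081/
</VirtualHost>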
Please note:
The OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; on another OS you will need to do some research.
I have been using the described approach to host iDempiere versions 5 and 6 for some time now and have not had any problems so far. Still, make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them), please report back to the community (by giving your own answer to this question or by posting to the iDempiere Google group). Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose other web ports for each installation. You may also need to slightly change the web servers' configuration if they use some default ports.

DB job to generate/email Oracle report output

The task is to have an Oracle report generated daily, automatically, and e-mailed to a user.
So I've sort of got this working (it works if I hardcode one of the reports server names below).
I created a job on the database that will generate the report. I'm able to get the report to email as a PDF to the destination with this command:
UTL_HTTP.REQUEST('http://server/reports/rwservlet?server=specific_report_server&report='||p_report_name||'&userid='||p_connstring||'&destype=mail'||p_parameters||'&desname='||p_to_recipientlist||'&cc='||p_cc_recipientlist||'&bcc='||p_bcc_recipientlist||'&subject=%22' || REPLACE(p_subject,' ','%20') || '%22&paramform=no&DESformat=pdf&ENVID='||p_envid);
That works great...
The problem, however, is that my organization has two report servers that are load balanced. Our server team could take down one of the servers without really any warning, so I can't just hardcode one of the report server names (the ?server= parameter above): it would work for a while, then stop working when that server goes down.
My server team asked me to look for a way to pull the server name from the formsweb.cfg file or from a default.env value within the job (there are parameters in there that hold the server name). The idea is that the "http://server" piece will direct the report to be run on the appropriate server, and the first part of the job could get the report server name from the config file on the server the report is run on. I'm not sure whether this is possible from the database level, or how to do it. Any ideas?
Is there perhaps a better way this can be done?
If there are two load-balanced servers, that strongly implies that the network folks must have configured some sort of virtual IP (VIP) for the service. You (and everyone else) should be using that VIP rather than a specific server name.
For example, if you have two servers reportA.yourdomain.com and reportB.yourdomain.com, you would almost certainly create a VIP for reports.yourdomain.com that load balances between the two servers (and knows whether one of the servers is down or whether a new reportC server has been added). This VIP would either do the load balancing itself or would point to an actual physical load balancer that distributes the traffic. All applications would reference the reports.yourdomain.com VIP rather than any hard-coded server names.
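To make that concrete, a minimal PL/SQL sketch of the job step using such a VIP (reports.yourdomain.com, rep_cluster, myreport.rdf, and the credentials are all placeholder values, not your real configuration):
DECLARE
  l_response VARCHAR2(32767);
BEGIN
  -- The host name is the VIP; the Reports server or cluster name is
  -- still passed in the server= parameter.
  l_response := UTL_HTTP.REQUEST(
      'http://reports.yourdomain.com/reports/rwservlet'
      || '?server=rep_cluster'
      || '&report=myreport.rdf'
      || '&userid=user/password@db'
      || '&destype=mail&desname=someone@yourdomain.com'
      || '&desformat=pdf&paramform=no');
END;
/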
