I have a scenario where the OPC UA client has no idea of the structure of the OPC UA server's address space, but the client does know the connection credentials. Can the OPC UA client still access data from the server without the namespace and node ID?
So far I have been getting the server data by specifying the namespace and node ID in the client request.
Can anyone help me understand OPC UA data access in detail?
TL;DR: Yes, you can use the Browse service to get a list of all nodes within the server.
More detailed answer:
Every server must have at least the following nodes (folders). In parentheses is each node's ID in namespace 0 (the OPC UA base namespace), which is fixed by the specification:
- Root (i=84)
- Objects (i=85)
- Types (i=86)
- Views (i=87)
OPC UA Specification Part 4 can be downloaded for free from the OPC Foundation website after registering. It defines the Browse and BrowseNext services. Using these services, you indicate a start node (e.g., one of the nodes above, such as Root = namespace 0, ID 84) and get all of its children.
In node-opcua you can find example code here:
https://github.com/node-opcua/node-opcua/blob/fd5e48bac996625aaa7c177d1f8ed0c40ee92fbc/test/end_to_end/u_test_e2e_BrowseRequest.js
In open62541 the example for browsing nodes is here:
https://github.com/open62541/open62541/blob/master/examples/client.c#L55
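In Java, browsing from the Root folder looks roughly like the sketch below. This is my own hedged example using the Eclipse Milo client SDK (not mentioned above); the endpoint URL is a placeholder, and the create(endpointUrl) shortcut assumes anonymous access, so real credentials would go into the client configuration instead.

import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.stack.core.Identifiers;
import org.eclipse.milo.opcua.stack.core.types.enumerated.BrowseDirection;
import org.eclipse.milo.opcua.stack.core.types.enumerated.BrowseResultMask;
import org.eclipse.milo.opcua.stack.core.types.enumerated.NodeClass;
import org.eclipse.milo.opcua.stack.core.types.structured.BrowseDescription;
import org.eclipse.milo.opcua.stack.core.types.structured.BrowseResult;
import org.eclipse.milo.opcua.stack.core.types.structured.ReferenceDescription;

import static org.eclipse.milo.opcua.stack.core.types.builtin.unsigned.Unsigned.uint;

public class BrowseRootExample {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; substitute your server's URL.
        OpcUaClient client = OpcUaClient.create("opc.tcp://localhost:4840");
        client.connect().get();

        // Start at the well-known Root folder (ns=0;i=84) defined by the spec.
        BrowseDescription browse = new BrowseDescription(
            Identifiers.RootFolder,
            BrowseDirection.Forward,
            Identifiers.References,   // follow every reference type...
            true,                     // ...including subtypes
            uint(NodeClass.Object.getValue() | NodeClass.Variable.getValue()),
            uint(BrowseResultMask.All.getValue())
        );

        BrowseResult result = client.browse(browse).get();
        ReferenceDescription[] references = result.getReferences();
        if (references != null) {
            for (ReferenceDescription rd : references) {
                // Each reference is a child node; recurse on rd.getNodeId()
                // (and call BrowseNext if the server returns a continuation
                // point) to walk the entire address space.
                System.out.println(rd.getBrowseName().getName()
                        + " -> " + rd.getNodeId());
            }
        }

        client.disconnect().get();
    }
}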
Related
I have a cluster of Elasticsearch nodes running on different AWS EC2 instances. They internally connect via a network within AWS, so their network and discovery addresses are set up within this internal network. I want to use the python elasticsearch library to connect to these nodes from the outside. The EC2 instances have static public IP addresses attached, and the elastic instances allow https connections from anywhere. The connection works fine, i.e. I can connect to the instances via browser and via the Python elasticsearch library. However, I now want to set up sniffing, so I set up my Python code as follows:
self.es = Elasticsearch([f'https://{elastic_host}:{elastic_port}' for elastic_host in elastic_hosts],
sniff_on_start=True, sniff_on_connection_fail=True, sniffer_timeout=60,
sniff_timeout=10, ca_certs=ca_location, verify_certs=True,
http_auth=(elastic_user, elastic_password))
If I remove the sniffing parameters, I can connect to the instances just fine. However, with sniffing, I immediately get elastic_transport.SniffingError: No viable nodes were discovered on the initial sniff attempt upon startup.
http.publish_host in the elasticsearch.yml configuration is set to the public IP address of my EC2 machines, and the /_nodes/_all/http endpoint returns the public IPs as the publish_address (i.e. x.x.x.x:9200).
After testing with our other microservices, we found that this problem lay in the elasticsearch-py library rather than in our Elasticsearch configuration: another microservice of ours, which is Go-based, could perform sniffing with no problem.
After further investigation, we linked the problem to this open issue in the elasticsearch-py library: https://github.com/elastic/elasticsearch-py/issues/2005.
The problem is that the authorization headers are not properly passed to the request made to Elasticsearch to discover the nodes. To my knowledge, there is currently no fix that does not involve altering the library itself. However, the error message is clearly misleading.
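To rule out the cluster side, you can call the same endpoint the sniffer uses with the credentials attached. Below is a minimal sketch of my own (in Java, using the JDK 11+ HttpClient; the host and credentials are placeholders). If it returns your public publish_address values, the problem is confirmed to be in the client library.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SniffCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your public node address and credentials.
        String auth = Base64.getEncoder().encodeToString(
                "elastic_user:elastic_password".getBytes(StandardCharsets.UTF_8));

        // Same endpoint the sniffer calls. With the Authorization header set,
        // it should return 200 and a publish_address for every node.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://x.x.x.x:9200/_nodes/_all/http"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        // Note: with a private CA you would also need an SSLContext loaded
        // with that CA certificate; omitted here for brevity.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}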
Both a proxy server and an anonymizer act as a mediator between your IP and the target website.
As I understand it, an anonymizer, especially a "network anonymizer", passes the request through all the computers connected to the same network, which makes it complex to trace your IP.
Is my understanding correct?
What is the main difference, or are they the same?
I found this article:
So, while the proxy server is the actual server that passes your information, "anonymizer" is just the name given to the online service per se. An anonymizer uses underlying proxy servers to be able to give you anonymity.
I am using the Atmosphere Framework, with WebSocket as my first transport preference and a fallback to long-polling. I used runtime-native as the Atmosphere dependency in Maven, with Tomcat v8 as the server.
I would like to receive the broadcast message in Java code, so I referred to the following links:
http://blog.javaforge.net/post/32659151672/atmosphere-per-session-broadcaster
Broadcast to only one client with Atmosphere
https://github.com/Atmosphere/atmosphere
https://atmosphere.java.net/atmosphere_whitepaper.pdf
From the above links and the chat samples I built the project successfully, but I would like to broadcast from client to server, both in Java.
I also wrote the broadcast code as follows:
Server:
BroadcasterFactory.getDefault().lookup("URL to broadcast", true).scheduleFixedBroadcast(message, 2, TimeUnit.SECONDS);
Client:
AtmosphereRequest request = atmosphereResource.getRequest();
String incomingMessage = request.getReader().readLine();
When I stepped through this in debug mode, I got null as the value. This made me ask whether I am doing something wrong or whether the framework doesn't support this.
FYI, I used this link:
https://github.com/Atmosphere/atmosphere/wiki/Creating-private-channel-of-communication-between-Browsers
I don't understand these lines:
privateChannel
.addAtmosphereResource(atmosphereResource_browser1)
.addAtmosphereResource(atmosphereResource_browser2);
Does atmosphereResource_browser mean it defines the browser name?
Please suggest how to proceed; links or videos would be helpful. Thanks in advance.
No, it's not the browser name; it is the AtmosphereResource, which represents the browser/user to that Broadcaster.
If you want to broadcast to a limited set of clients, you can create a private channel and then broadcast to that channel, as in the sketch below.
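Here is a rough sketch, reusing the BroadcasterFactory.getDefault() style from your own code (note that newer Atmosphere versions inject the BroadcasterFactory rather than using the static getDefault()); the channel ID "/private/pair-1" and the helper names are purely illustrative:

import org.atmosphere.cpr.AtmosphereResource;
import org.atmosphere.cpr.Broadcaster;
import org.atmosphere.cpr.BroadcasterFactory;

public class PrivateChannelSketch {

    // Illustrative helper: joins two users' AtmosphereResources
    // (one per connected browser) into their own Broadcaster.
    static Broadcaster pair(AtmosphereResource browser1,
                            AtmosphereResource browser2) {
        // lookup(id, true) creates the Broadcaster if it does not exist yet.
        Broadcaster privateChannel =
                BroadcasterFactory.getDefault().lookup("/private/pair-1", true);
        privateChannel.addAtmosphereResource(browser1);
        privateChannel.addAtmosphereResource(browser2);
        return privateChannel;
    }

    // Only the resources added to this Broadcaster receive the message;
    // everyone else connected to the server is unaffected.
    static void send(Broadcaster privateChannel, String message) {
        privateChannel.broadcast(message);
    }
}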
I am working on some legacy code on Windows for a desktop app in C.
The client needs to know the geo-location of the user who is running the application.
I have the geo-location code all working (using MaxMind: http://dev.maxmind.com/).
But now I'm looking for help in getting their external IP.
From all the discussions on this topic throughout SO and elsewhere, it seems there is a way to do this by connecting to a "reliable" host (server) and then doing some kind of lookup. I'm not too savvy with WinSock, but it may be the simplest technology to use.
Another option is to use WinHttpConnect technology.
Both have "C" interfaces.
Thank you for your support and suggestions.
You can write a simple web service that checks the IP address(es) that the program presents when connecting to that web service.
Look at http://whatismyip.com for an example.
Note that multiple addresses can be presented by the HTTP protocol if there are proxy servers along the route.
You can design your simple web service to get the IP of the client. See
How do I get the caller's IP address in a WebMethod?
and then return that address back to the caller.
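As a minimal sketch of such a service, here is a version using the JDK's built-in com.sun.net.httpserver (my choice, not something from the question). It echoes the caller's address back, preferring X-Forwarded-For when a proxy inserted one. Your C desktop app would then simply GET this URL (e.g., via WinHTTP) and read the body.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class WhatIsMyIpService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/ip", exchange -> {
            // If a proxy forwarded the request, the original client IP is
            // usually the first entry of X-Forwarded-For; otherwise fall
            // back to the socket's remote address.
            String forwarded =
                    exchange.getRequestHeaders().getFirst("X-Forwarded-For");
            String ip = (forwarded != null)
                    ? forwarded.split(",")[0].trim()
                    : exchange.getRemoteAddress().getAddress().getHostAddress();

            byte[] body = ip.getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}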
Note that in about 15% of cases (a figure from my own experience) the geolocation will be way off. The classic example is that most AOL users are routed through a small number of proxy servers. However, there are many other cases where the public IP does not match the user's actual location. Additionally, geo-IP databases are sometimes just wrong.
Edit
It is not possible to detect your external IP address using only in-browser code.
The WebSocket protocol has no provision for exposing your external IP address:
https://www.rfc-editor.org/rfc/rfc6455
You need an outside server to tell you what IP it sees.
I am making a JMS connection using Java. The code I am using to establish the connection is:
QueueConnectionFactory factory =
new com.tibco.tibjms.TibjmsQueueConnectionFactory(JMSserverUrl);
where JMSserverUrl is the variable that stores my JMS URL.
Now the problem is that I need to add the fault-tolerance URL, i.e., two different URLs. Can anyone tell me how to specify two URLs together in the above code so that if the first URL is not accessible, the client tries connecting to the other URL?
Put all URLs in a single string with a comma between them.
new TibjmsQueueConnectionFactory("ssl://host01:20302,ssl://host02:20302");
Caveat: I am a TIBCO EMS newbie, but this seems to work, as evidenced by the error I can get:
javax.jms.JMSSecurityException: Failed to connect to any server at:
ssl://host01:20302,ssl://host02:20302
[Error: Can not initialize SSL client: no trusted certificates are set:
url that returned this exception = SSL://host01:20302 ]
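Putting that together, here is a hedged Java sketch of a fault-tolerant connection; the host names, the credentials, and port 7222 (the EMS default tcp port) are all placeholders:

import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;

public class FaultTolerantConnect {
    public static void main(String[] args) throws Exception {
        // Comma-separated server list: EMS tries host01 first and falls
        // back to host02 if the first URL is unreachable.
        String jmsServerUrl = "tcp://host01:7222,tcp://host02:7222";

        QueueConnectionFactory factory =
                new com.tibco.tibjms.TibjmsQueueConnectionFactory(jmsServerUrl);

        QueueConnection connection =
                factory.createQueueConnection("user", "password");
        connection.start();
    }
}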
The .NET documentation for TIBCO (I know you're using Java) suggests that you can provide a comma-delimited list of server URLs for messaging connections. Bear in mind that I don't have any real TIBCO experience, but this is a common way to handle initial connection fault tolerance (i.e., prior to establishing a connection and receiving information about the cluster, after which failover is typically handled by the connection), so it may be worth a try. Another solution I have seen for this problem is creating a virtual IP and handling fault tolerance at the network level.