Question
How can one program communicate with another (for instance, a Windows service) through environment variables (not system or user ones)?
What we have
Well, I have the following scheme for a data logger:
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| Windows | ------------------------------------> | user |
| service | <------------------------------------ | interface |
--------------- udp communication -------------
|^ keyboard
V| and screen
--------
| user |
--------
In the current implementation:
the Windows service is always running while Windows is running
the user can open and close the user interface (of course :p)
the Windows service acquires data from the sensors
the user interface automatically requests data from the Windows service every 100 ms over UDP, using a protocol we implemented (we call it the GetData() command and its response), and shows the data to the user
the user can send other commands through the same protocol to change which data is acquired (we call it the SetSensors() command and its response)
Both the user interface and the Windows service are developed in Borland C++ Builder 6 and use the NMUDP component, from the FastNet tab, for UDP communication.
What we are thinking of doing
Because of some buffer issues, and to keep the UDP channel free for only the SetSensors() command and its response, we are considering the following instead of using GetData():
the Windows service would get data from the sensors and put it in environment variables
the user interface would read those variables and show the data to the user
Scheme after the change we are considering
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| | ------------------------------------> | |
| | environment variables | |
| | (get data from sensors) | |
| Windows | | user |
| service | | interface |
| | | |
| | ------------------------------------> | |
| | <------------------------------------ | |
--------------- udp communication -------------
(send commands to service) |^ keyboard
V| and screen
--------
| user |
--------
Is there any way to do that?
We would not use system or user environment variables, because those are written to the Windows Registry, i.e. saved to the hard drive, which makes things slower...
As @HansPassant said, I cannot do that directly: a process gets a copy of its parent's environment when it is created, so changes the service makes to its environment variables would never be visible to the already-running user interface. Although I saw some ways to do it via a memory-mapped file, it is easier to simply add one more UDP communication channel on another port. So:
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| | ------------------------------------> | |
| | udp communication (port 3) | |
| | (get data from sensors) | |
| Windows | | user |
| service | | interface |
| | (port 1) | |
| | ------------------------------------> | |
| | <------------------------------------ | |
--------------- udp communication (port 2) -------------
(send commands to service) |^ keyboard
V| and screen
--------
| user |
--------
If someone provides a better solution, I'll mark it as the solution in the future.
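For reference, here is a minimal sketch of the memory-mapped-file approach mentioned above, using the plain Win32 API (the SensorData layout and the "Global\\SensorData" mapping name are illustrative assumptions, not part of the original project). The mapping is backed by the paging file, so nothing is written to disk:

#include <windows.h>

// Assumed layout of the shared block; adapt it to the real sensor set.
struct SensorData
{
    DWORD  sequence;      // writer increments this so readers can detect updates
    double values[16];
};

// Writer side (the Windows service): create the shared block once at startup.
SensorData* CreateSharedBlock(HANDLE& hMap)
{
    hMap = CreateFileMapping(
        INVALID_HANDLE_VALUE,    // backed by the paging file, not a real file
        NULL,                    // default security; a real service may need an
                                 // explicit SECURITY_DESCRIPTOR so the UI can open it
        PAGE_READWRITE, 0, sizeof(SensorData),
        "Global\\SensorData");   // "Global\\" makes the name visible across sessions
    if (!hMap) return NULL;
    return (SensorData*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                      0, 0, sizeof(SensorData));
}

// Reader side (the user interface): open the block the service created.
const SensorData* OpenSharedBlock(HANDLE& hMap)
{
    hMap = OpenFileMapping(FILE_MAP_READ, FALSE, "Global\\SensorData");
    if (!hMap) return NULL;
    return (const SensorData*)MapViewOfFile(hMap, FILE_MAP_READ,
                                            0, 0, sizeof(SensorData));
}

The service would fill values and bump sequence on each acquisition cycle; the UI polls every 100 ms and redraws only when sequence changes. A production version would also need some synchronization (for example a named mutex) so the UI never reads a half-written sample, plus UnmapViewOfFile()/CloseHandle() on shutdown.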
Related
I have the following table containing users and the devices that they use
+--------+--------+
| UserId | Device |
+--------+--------+
| user1 | PC |
| user1 | TV |
| user2 | TV |
| user2 | Phone |
| user2 | Phone |
| user3 | Phone |
| user4 | PC |
| user5 | Phone |
+--------+--------+
I want to find the percentage of users using a given device. If I use percentOfTotal(count(UserId), [Device]), the result will be as follows:
+--------+----------------+
| Device | Usage rate |
+--------+----------------+
| PC | 25% |
| TV | 25% |
| Phone | 50% |
+--------+----------------+
However, this result is not what I want, since a user can use more than one device. In my opinion, the usage rate should be calculated as (count of distinct users using the device) / (count of distinct users overall), i.e. the result should look like this:
+--------+----------------+
| Device | Usage rate |
+--------+----------------+
| PC | 40% |
| TV | 40% |
| Phone | 60% |
+--------+----------------+
I wonder if I can calculate that using AWS QuickSight.
At the moment you can define a measure that returns the number of distinct users for each device, but not the total number of distinct users. Once we add the ability to get the total number of distinct users, you should be able to do everything in QuickSight. We are hoping to add this soon. The current workaround is to make changes in data prep or use custom SQL to provide the number of distinct users in the dataset.
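For instance, the custom SQL workaround could be sketched as below (user_devices is a placeholder name for the table shown above; exact syntax depends on the underlying data source):

SELECT Device,
       COUNT(DISTINCT UserId) * 100.0
           / (SELECT COUNT(DISTINCT UserId) FROM user_devices) AS usage_rate
FROM user_devices
GROUP BY Device;

With the sample data this yields PC 40%, TV 40% and Phone 60%, matching the expected result.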
After reading more about how transactions are processed by NEAR, I came up with this picture of how a few key parts are related.
I am seeking some pointers on how to correct this.
First, a few key points I'm currently aware of, only some of which are illustrated below, are:
an Action must be one of 8 supported operations on the network
CreateAccount to make a new account (for a person, company, contract, car, refrigerator, etc)
DeployContract to deploy a new contract (with its own account)
FunctionCall to invoke a method on a contract (with budget for compute and storage)
Transfer to transfer tokens from one account to another
Stake to express interest in becoming a proof-of-stake validator at the next available opportunity
AddKey to add a key to an existing account (either FullAccess or FunctionCall access)
DeleteKey to delete an existing key from an account
DeleteAccount to delete an account (and transfer balance to a beneficiary account)
a Transaction is a collection of Actions augmented with critical information about their
origin (i.e. cryptographically signed by signer)
destination or intention (i.e. sent or applied to receiver)
recency (i.e. block_hash distance from the most recent block is within acceptable limits)
uniqueness (i.e. nonce must be unique for a given signer)
a SignedTransaction is a Transaction cryptographically signed by the signer account mentioned above
Receipts are basically what NEAR calls Actions after they pass from outside (untrusted) to inside (trusted) the "boundary of trust" of our network. Having been cryptographically verified as valid, recent and unique, a Receipt is an Action ready for processing on the blockchain.
since, by design, each Account lives on one and only one shard in the system, Receipts are either applied to the shard on which they first appear or are routed across the network to the proper "home shard" for their respective sender and receiver accounts. DeleteKey is an Action that would never need to be routed to more than 1 shard while Transfer would always be routed to more than 1 shard unless both signer and receiver happen to have the same "home shard"
a "finality gadget" is a collection of rules that balances the urgency of maximizing blockchain "liveness" (ie. responsiveness / performance) with the safety needed to minimize the risk of accepting invalid transactions onto the blockchain. One of these rules includes "waiting for a while" before finalizing (or sometimes reversing) transactions -- this amounts to waiting a few minutes for 120 blocks to be processed before confirming that a transaction has been "finalized".
---.
o--------o | o------------------------o o-------------------o
| Action | | | Transaction | | SignedTransaction |
o--------o | | | | |
| | o--------o | | o-------------o |
o--------o | | | Action | signer | | | Transaction | |
| Action | | --> | o--------o receiver | --> | | | | ---.
o--------o | | | Action | block_hash | | | | | |
| | o--------o nonce | | | | | |
o--------o | | | Action | | | | | | |
| Action | | | o--------o | | o-------------o | |
o--------o | o------------------------o o-------------------o |
---' |
|
sent to network |
.---------------------------------------------------------------------------'
| <----------
|
| ---.
| XXX o--------o o---------o |
| XX | Action | --> | Receipt | |
| o--------------------------------o o--------o o---------o |
| | | |
| | 1. Validation (block_hash) | o--------o o---------o |
'--> | 2. Verification (signer keys) | | Action | --> | Receipt | | --.
| 3. Routing (receiver) | o--------o o---------o | |
| | | |
o--------------------------------o o--------o o---------o | |
transaction arrives XX | Action | --> | Receipt | | |
XXX o--------o o---------o | |
---' |
|
applied locally OR propagated to other shards |
.---------------------------------------------------------------------------'
| <----------
|
|
| --. .-------. .--. .--. .--. o-----------o
| o---------o | | | | | | | | | | |
'--> | Receipt | | Shard | | | | | | | | | |
o---------o | A | | | | | | | | | |
| --' | | | | | | | | | |
| | | | | | | | | | |
| --. | | | | | | | | | Block |
| o---------o | | Block | | | | | o o o | | | (i) |
'--> | Receipt | | | (i) | | | | | | | | finalized |
o---------o | | | | | | | | | | |
| | Shard | | | | | | | | | |
| o---------o | B | | | | | | | | | |
'--> | Receipt | | | | | | | | | | | |
o---------o | | | | | | | | | | |
--' '-------' '--' '--' '--' o-----------o
| |
'------------------------------------------------'
about 3 blocks to finality
It's unclear to me what you mean by "routed to more than one shard". A receipt can only be routed to one shard. Also, I don't understand your description of the finality gadget, and I don't know where you got "120 blocks" from. Normally you just need to wait 3 blocks for a block to be finalized.
Great explanation! Core protocol devs should complete that picture and include it in the low-level documentation!
There are some corrections. A Transaction with all its actions gets converted to a single Receipt. Receipts can have several actions too. Every receipt goes to a single specific shard/receiver account. In the case of a "Transfer" action inside a Transaction/Receipt, it can generate new receipts to complete the transfer:
e.g. Alice sends 100N to Bob
Receipt 1, action Transfer: acts on Alice's account; 100N is deducted from Alice's account. If that succeeds, a 2nd Receipt is created:
Receipt 2, single action: act on Bob's account to "increase balance by 100N". This second receipt gets "published" to be routed to Bob's shard.
If the 2nd receipt fails (no Bob account), a 3rd Receipt is created to refund 100N to Alice. This 3rd Receipt is again published to be routed back to Alice's shard.
So every receipt (which can have more than one action) is directed to a single specific account and therefore a single shard.
.- At least this is what I understand 'til now -.
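To make the flow above concrete, here is a toy model of the three receipts (a sketch in Rust with invented names, not NEAR's actual implementation):

#[derive(Debug)]
struct Receipt {
    receiver: String, // the single account (and therefore shard) this receipt targets
    action: String,
}

// Toy version of the Alice -> Bob transfer described above.
fn transfer(sender: &str, receiver: &str, amount: u128, receiver_exists: bool) -> Vec<Receipt> {
    let mut receipts = Vec::new();
    // Receipt 1: acts on the sender's shard, deducting the amount.
    receipts.push(Receipt { receiver: sender.to_string(), action: format!("deduct {}", amount) });
    // Receipt 2: published and routed to the receiver's shard.
    receipts.push(Receipt { receiver: receiver.to_string(), action: format!("credit {}", amount) });
    if !receiver_exists {
        // Receipt 2 would fail on the receiver's shard, so Receipt 3 refunds the sender.
        receipts.push(Receipt { receiver: sender.to_string(), action: format!("refund {}", amount) });
    }
    receipts
}

fn main() {
    for r in transfer("alice.near", "bob.near", 100, false) {
        println!("{:?}", r);
    }
}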
I'm reading the code, Sherif; more details:
Even if a Transaction has more than one action, each transaction is converted to a single receipt. A Receipt can have more than one action, but a single 'receiver'.
All Receipts are validated. When routed to other shards (if the 'receiver' account is not in the current shard), the receiving node will re-validate the receipt before processing. So there's no trusted/untrusted boundary. Everything gets re-validated in the nodes before processing.
All local receipts are processed first, then delayed receipts are checked (waiting for data), and then receipts received from other nodes are processed.
Some Receipts can be "Data Receipts", containing chunks of data required to execute other receipts. It's like sending input data for actions in chunks to other nodes. When all the data chunks are received, the related "Action Receipt" is executed.
When an "Action Receipts" has all it's data, every action inside the receipt is executed: code
and code
There's a loop for every action in the receipt, and the action is applied to the receiver account.
.-to be continued-.
"Receipts are either applied to the shard on which they first appear or are routed across the network to the proper "home shard" for their respective sender and receiver accounts."
So here is my understanding: an AccountID sends a transaction to the shard it is assigned to for the given epoch (every epoch there is a reshuffling of accounts across shards). The shard (the set of AccountIDs of validators, etc.) verifies the transaction. If the receiver is on another shard, a receipt is created and routed to the other shard.
While the transaction from the sender can be included in the next block, it will take up to three blocks to validate it and finalize the routing to the receiver shard.
We have a web application hosted on Azure that sends telemetry to App Insights, and the dev team is asking if it is OK to turn off sending the session/keepalive data that's being posted from the web application. Will this affect any functionality like User Flows etc. in Application Insights?
Any guidance on this?
The following is sample data:
timestamp                    | id                     | source | name                   | url                                         | success | resultCode | duration | performanceBucket
2019-09-25T16:00:31.8191577Z | |Ac34D.9fIx+.4c3e0b35_ |        | POST session/keepalive | http://XXXXXXXXXXXXXX.com/session/keepalive | TRUE    | 200        | 15.8274  | <250ms
2019-09-25T16:00:42.7423811Z | |Ac34D.FqSNy.83ee6e0d_ |        | POST session/keepalive | http://XXXXXXXXXXXXXX.com/session/keepalive | TRUE    | 200        | 38.3679  | <250ms
2019-09-25T16:00:48.716939Z  | |Ac34D.h8kwN.34c0b012_ |        | POST session/keepalive | http://XXXXXXXXXXXXXX.com/session/keepalive | TRUE    | 200        | 16.0359  | <250ms
2019-09-25T16:00:54.1607213Z | |Ac34D.v2qfF.4c3e0b36_ |        | POST session/keepalive | http://XXXXXXXXXXXXXX.com/session/keepalive | TRUE    | 200        | 15.2518  | <250ms
Views in Application Insights typically target a specific set of telemetry item types.
For instance, the User Flows UI leverages the PageView and CustomEvent telemetry types. Therefore, if keepalive is reported as one of those types, it will be displayed in that UI.
However, if the example above is Dependency telemetry, then that view won't be affected.
In general, if you'd like to drop some of the telemetry before it reaches AI and is processed for storage, you'd use a TelemetryProcessor (in the case of the JavaScript SDK, a TelemetryInitializer) to filter it out:
var telemetryInitializer = (envelope) => {
    // Returning false drops the item so it is never sent. The field to
    // inspect depends on the telemetry type; the URL works for the keepalives above.
    var data = envelope.baseData || {};
    if (data.url && data.url.indexOf('session/keepalive') !== -1) return false;
};
appInsights.addTelemetryInitializer(telemetryInitializer);
Database: Oracle 12c
I want to take a single partition, or a set of partitions, detach it from a table (or set of tables) on DB1, and move it to another table on another database. I would like to avoid DML for performance reasons (it needs to be fast).
Each Partition will contain between three and four hundred million records.
Each Partition will be broken up into approximately 300 Sub-Partitions.
The task will need to be automated.
Some thoughts I had:
Somehow put each partition in its own datafile upon creation, then detach it from the source and attach it to the destination?
Extract the whole partition (not record-by-record)
Any other non-DML solutions are also welcome.
Example (move Part#33 from both tables to DB#2, preferably with a single operation):
__________________ __________________
| DB#1 | | DB#2 |
|------------------| |------------------|
|Table1 | |Table1 |
| Part#1 | | Part#1 |
| ... | | ... |
| Part#33 | ----> | Part#32 |
| Subpart#1 | | |
| ... | | |
| Subpart#300 | | |
|------------------| |------------------|
|Table2 | |Table2 |
| Part#1 | | Part#1 |
| ... | | ... |
| Part#33 | ----> | Part#32 |
| Subpart#1 | | |
| ... | | |
| Subpart#300 | | |
|__________________| |__________________|
Please read the document below; it has examples of exchanging table partitions.
https://oracle-base.com/articles/misc/partitioning-an-existing-table-using-exchange-partition
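In outline, applying the exchange-partition technique from that article to this case could look like the sketch below (table, partition, and tablespace names are placeholders; note that because Part#33 is composite-partitioned, the staging table would itself have to be partitioned to match the 300 sub-partitions, and the bound shown assumes range partitioning):

-- On DB#1: swap Part#33's segments into a staging table.
-- EXCHANGE PARTITION is a data-dictionary operation, so no rows are moved.
ALTER TABLE table1
  EXCHANGE PARTITION part33 WITH TABLE table1_stage
  INCLUDING INDEXES WITHOUT VALIDATION;

-- Ship table1_stage to DB#2 without row-by-row processing, e.g. with
-- transportable tablespaces: set its tablespace read only, export the
-- metadata (expdp ... TRANSPORT_TABLESPACES=...), copy the datafiles,
-- and import on DB#2.

-- On DB#2: create an empty Part#33 and swap the staging table in.
ALTER TABLE table1 ADD PARTITION part33 VALUES LESS THAN (/* upper bound */);
ALTER TABLE table1
  EXCHANGE PARTITION part33 WITH TABLE table1_stage
  INCLUDING INDEXES WITHOUT VALIDATION;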
I am in the process of moving and testing a development site on the actual domain name now, and I just wanted to check whether I was missing anything and also get some advice.
It is a Magento 1.8.1 install from Turnkey Linux running on an m1.medium instance.
What I have done (so far) is: created an image of the development instance, made a new account, and copied it over there. I then made an Elastic IP and associated it with the new instance. Next, I pointed the A record of the production domain to the Elastic IP.
Now, if I go to the production domain I get redirected to the development domain. Is there a reason for this?
Ideally I would like to have two instances: one dev one that is off unless needed and, of course, the production one, which is going to be live 24/7. However, if I turn the development domain off, it stops the other too.
I have a feeling it's just because I need to change instances of the dev domain in the Magento database / back-end; however, I wanted to get a more knowledgeable answer as I don't want to break either of the instances.
Also, I should probably mention that the development domain is a subdomain, i.e. shop.mysite.com, and the live one is just normal, i.e. mysite.com. I'm not entirely sure this is relevant, but I thought it worth a mention.
Thanks in advance for any help.
The reason your new instance is redirecting to the old URL is that, in the core_config_data table of your Magento database, the web/unsecure/base_url and web/secure/base_url paths point to your old URL.
So if you are using MySQL, you can query your database as follows:
mysql> use magento;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from core_config_data;
+-----------+---------+----------+-------------------------------+-------------------------------------+
| config_id | scope | scope_id | path | value |
+-----------+---------+----------+-------------------------------+-------------------------------------+
| 1 | default | 0 | web/seo/use_rewrites | 1 |
| 2 | default | 0 | admin/dashboard/enable_charts | 0 |
| 3 | default | 0 | web/unsecure/base_url | http://magento.myolddomain.com/ |
| 4 | default | 0 | web/secure/use_in_frontend | 1 |
| 5 | default | 0 | web/secure/base_url | https://magento.myolddomain.com/ |
| 6 | default | 0 | web/secure/use_in_adminhtml | 1 |
| 7 | default | 0 | general/locale/code | en_US |
| 8 | default | 0 | general/locale/timezone | Europe/London |
| 9 | default | 0 | currency/options/base | USD |
| 10 | default | 0 | currency/options/default | USD |
| 11 | default | 0 | currency/options/allow | USD |
| 12 | default | 0 | general/region/display_all | 1 |
| 13 | default | 0 | general/region/state_required | AT,CA,CH,DE,EE,ES,FI,FR,LT,LV,RO,US |
| 14 | default | 0 | catalog/category/root_id | 2 |
+-----------+---------+----------+-------------------------------+-------------------------------------+
14 rows in set (0.00 sec)
and you can change it as follows:
mysql> update core_config_data set value='http://magento.mynewdomain.com/' where path='web/unsecure/base_url';
mysql> update core_config_data set value='https://magento.mynewdomain.com/' where path='web/secure/base_url';
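You can verify the change afterwards with a quick query; note also that Magento caches configuration, so the new URLs may not take effect until the cache is cleared (in Magento 1, typically by emptying var/cache):

mysql> select path, value from core_config_data where path like 'web/%/base_url';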