Is it advisable to remove the session/keepalive log data sent from applications to Azure App Insights?

We have a web application hosted on Azure that sends telemetry to App Insights, and the dev team is asking whether it is OK to turn off sending the session/keepalive data that's being posted from the web application. Will this affect any functionality, such as User Flows, in Application Insights?
Any guidance on this?
Following is sample data:
timestamp | id | source | name | url | success | resultCode | duration | performanceBucket
-- | -- | -- | -- | -- | -- | -- | -- | --
2019-09-25T16:00:31.8191577Z | \|Ac34D.9fIx+.4c3e0b35_ | | POST session/keepalive | http://XXXXXXXXXXXXXX.com/session/keepalive | TRUE | 200 | 15.8274 | <250ms
2019-09-25T16:00:42.7423811Z | \|Ac34D.FqSNy.83ee6e0d_ | | POST session/keepalive | http://XXXXXXXXXXXXXX.com/session/keepalive | TRUE | 200 | 38.3679 | <250ms
2019-09-25T16:00:48.716939Z | \|Ac34D.h8kwN.34c0b012_ | | POST session/keepalive | http://XXXXXXXXXXXXXX.com/session/keepalive | TRUE | 200 | 16.0359 | <250ms
2019-09-25T16:00:54.1607213Z | \|Ac34D.v2qfF.4c3e0b36_ | | POST session/keepalive | http://XXXXXXXXXXXXXX.com/session/keepalive | TRUE | 200 | 15.2518 | <250ms

Views in Application Insights typically target a specific set of telemetry item types.
For instance, the User Flows UI uses the PageView and CustomEvent telemetry types, so if the keepalive is reported as one of those types it will be displayed in that UI.
However, if the example above is dependency telemetry, then that view won't be affected.
In general, if you'd like to drop some of the telemetry before it reaches Application Insights and is processed for storage, you'd use a TelemetryProcessor (in the case of the JavaScript SDK, a TelemetryInitializer) to filter it out:
var telemetryInitializer = (envelope) => {
    // Returning false drops the item; 'someField' is a placeholder for whatever
    // property identifies the keepalive telemetry in your payload.
    if (envelope.data.someField == 'keepalive') return false;
};
appInsights.addTelemetryInitializer(telemetryInitializer);
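If the keepalive calls are collected by the JavaScript SDK (for example as Ajax dependency telemetry), a more targeted sketch could look like the following. It assumes the v2 Application Insights JavaScript SDK envelope shape (baseType/baseData), and the 'session/keepalive' substring check is just an assumption about your URLs; adjust it to whatever actually identifies these items in your telemetry.

appInsights.addTelemetryInitializer((envelope) => {
    // Drop Ajax/dependency items for the keepalive endpoint before they are sent.
    if (envelope.baseType === 'RemoteDependencyData' &&
        envelope.baseData && envelope.baseData.name &&
        envelope.baseData.name.indexOf('session/keepalive') !== -1) {
        return false; // returning false filters the item out
    }
});

If the keepalive items are tracked server-side instead, the equivalent filtering would be done with a TelemetryProcessor in the server-side SDK, as mentioned above.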

Related

SOLVED Laravel website returns "Error 404" for GET requests

So I got a template of a Flutter app that retrieves all its data from a website using HTTP GET requests.
I have the following method that gets the list of restaurants:
Future<Stream<Restaurant>> getNearRestaurants(LocationData myLocation, LocationData areaLocation) async {
  String _nearParams = '';
  String _orderLimitParam = '';
  if (myLocation != null && areaLocation != null) {
    _orderLimitParam = 'orderBy=area&limit=5';
    _nearParams = '&myLon=${myLocation.longitude}&myLat=${myLocation.latitude}&areaLon=${areaLocation.longitude}&areaLat=${areaLocation.latitude}';
  }
  final String url = '${GlobalConfiguration().getString('api_base_url')}restaurants?$_nearParams&$_orderLimitParam';
  final client = new http.Client();
  final streamedRest = await client.send(http.Request('get', Uri.parse(url)));
  return streamedRest.stream
      .transform(utf8.decoder)
      .transform(json.decoder)
      .map((data) => Helper.getData(data))
      .expand((data) => (data as List))
      .map((data) {
    return Restaurant.fromJSON(data);
  });
}
However, when I swap the template's url variable for my own website, the app gets stuck and streamedRest comes back with an error 404 page.
Tried Solutions:
I surrounded it with a try/catch block and it gave me no exceptions.
I also installed Postman and sent the same GET request for the list of restaurants that the Flutter code above tries to retrieve, and I see this: Postman GET screenshot
It's as if my website cannot route to the specific pages in my API folder, but they are all defined in api.php.
Update 1:
My web.php looks like this: https://pastebin.com/QRG300uL. It seems to be similar to what was suggested below.
Update 2:
I ran php artisan route:list and it showed that all the routes seem to be there:
| Domain | Method | URI | Name | Action | Middleware |
| | POST | api/restaurant_reviews | restaurant_reviews.store | App\Http\Controllers\API\RestaurantReviewAPIController@store | api |
| | GET|HEAD | api/restaurant_reviews | restaurant_reviews.index | App\Http\Controllers\API\RestaurantReviewAPIController@index | api |
| | GET|HEAD | api/restaurant_reviews/create | restaurant_reviews.create | App\Http\Controllers\API\RestaurantReviewAPIController@create | api |
| | DELETE | api/restaurant_reviews/{restaurant_review} | restaurant_reviews.destroy | App\Http\Controllers\API\RestaurantReviewAPIController@destroy | api |
| | GET|HEAD | api/restaurant_reviews/{restaurant_review} | restaurant_reviews.show | App\Http\Controllers\API\RestaurantReviewAPIController@show | api |
| | PUT|PATCH | api/restaurant_reviews/{restaurant_review} | restaurant_reviews.update | App\Http\Controllers\API\RestaurantReviewAPIController@update | api |
| | GET|HEAD | api/restaurant_reviews/{restaurant_review}/edit | restaurant_reviews.edit | App\Http\Controllers\API\RestaurantReviewAPIController@edit | api |
| | GET|HEAD | api/restaurants | restaurants.index | App\Http\Controllers\API\RestaurantAPIController@index | api |
| | POST | api/restaurants | restaurants.store | App\Http\Controllers\API\RestaurantAPIController@store | api |
| | GET|HEAD | api/restaurants/create | restaurants.create | App\Http\Controllers\API\RestaurantAPIController@create | api |
| | GET|HEAD | api/restaurants/{restaurant} | restaurants.show | App\Http\Controllers\API\RestaurantAPIController@show | api |
| | DELETE | api/restaurants/{restaurant} | restaurants.destroy | App\Http\Controllers\API\RestaurantAPIController@destroy | api |
| | PUT|PATCH | api/restaurants/{restaurant} | restaurants.update | App\Http\Controllers\API\RestaurantAPIController@update | api |
| | GET|HEAD | api/restaurants/{restaurant}/edit | restaurants.edit | App\Http\Controllers\API\RestaurantAPIController@edit | api |
| | POST | api/send_reset_link_email | | App\Http\Controllers\API\UserAPIController@sendResetLinkEmail | api |
| | GET|HEAD | api/settings | | App\Http\Controllers\API\UserAPIController@settings | api |
Solution:
This worked for me after changing a lot of things: I changed my GET request URL from "www.domain.com/api/restaurants" to "www.domain.com/public/api/restaurants" (which suggests the web server's document root was not pointed at Laravel's public folder).
Well, I don't know about your Flutter code, since I use different methods for retrieving data from an API, but regarding the routes I suggest you do it like I do.
In web.php, the route file:
// API routes
Route::get('/company/api/fetch', 'ApiController@fetch_companies');
My API controller:
public function fetch_companies()
{
    $companies = Companies::all();
    return response()->json($companies);
}
This way the data is available at the route /company/api/fetch (you can change that as you want), and when a GET request hits that route it returns JSON; a sketch of the same idea in routes/api.php is shown below.
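As a variation (not from the original answer): if you keep the endpoint in routes/api.php instead, a minimal sketch could look like the following. In a standard Laravel setup, routes defined there are automatically prefixed with /api; the /companies path and the ApiController name are just assumptions for the example.

// routes/api.php -- routes here are automatically prefixed with /api,
// so this hypothetical endpoint would be reachable at /api/companies
Route::get('/companies', 'ApiController@fetch_companies');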
For the request handling on the Flutter side, I suggest you structure your functions and classes as shown in the documentation.
Note: the Flutter solution I suggested may not work in your case, because you are using a Stream, which is different from this type of request; this type runs only once, while the Stream runs many times and yields data every time it gets new data from the server.

How accurate is this picture of how transactions are processed on the NEAR platform?

After reading more about how transactions are processed by NEAR I came up with this picture of how a few key parts are related.
I am seeking some pointers on how to correct this.
First a few key points I'm currently aware of, only some of which are illustrated below, are:
an Action must be one of the 8 supported operations on the network
CreateAccount to make a new account (for a person, company, contract, car, refrigerator, etc)
DeployContract to deploy a new contract (with its own account)
FunctionCall to invoke a method on a contract (with budget for compute and storage)
Transfer to transfer tokens from one account to another
Stake to express interest in becoming a proof-of-stake validator at the next available opportunity
AddKey to add a key to an existing account (either FullAccess or FunctionCall access)
DeleteKey to delete an existing key from an account
DeleteAccount to delete an account (and transfer balance to a beneficiary account)
a Transaction is a collection of Actions augmented with critical information about their
origin (ie. cryptographically signed by signer)
destination or intention (ie. sent or applied to receiver)
recency (ie. block_hash distance from most recent block is within acceptable limits)
uniqueness (ie. nonce must be unique for a given signer)
a SignedTransaction is a Transaction cryptographically signed by the signer account mentioned above
Receipts are basically what NEAR calls Actions after they pass from outside (untrusted) to inside (trusted) the "boundary of trust" of our network. Having been cryptographically verified as valid, recent and unique, a Receipt is an Action ready for processing on the blockchain.
since, by design, each Account lives on one and only one shard in the system, Receipts are either applied to the shard on which they first appear or are routed across the network to the proper "home shard" for their respective sender and receiver accounts. DeleteKey is an Action that would never need to be routed to more than 1 shard while Transfer would always be routed to more than 1 shard unless both signer and receiver happen to have the same "home shard"
a "finality gadget" is a collection of rules that balances the urgency of maximizing blockchain "liveness" (ie. responsiveness / performance) with the safety needed to minimize the risk of accepting invalid transactions onto the blockchain. One of these rules includes "waiting for a while" before finalizing (or sometimes reversing) transactions -- this amounts to waiting a few minutes for 120 blocks to be processed before confirming that a transaction has been "finalized".
---.
o--------o | o------------------------o o-------------------o
| Action | | | Transaction | | SignedTransaction |
o--------o | | | | |
| | o--------o | | o-------------o |
o--------o | | | Action | signer | | | Transaction | |
| Action | | --> | o--------o receiver | --> | | | | ---.
o--------o | | | Action | block_hash | | | | | |
| | o--------o nonce | | | | | |
o--------o | | | Action | | | | | | |
| Action | | | o--------o | | o-------------o | |
o--------o | o------------------------o o-------------------o |
---' |
|
sent to network |
.---------------------------------------------------------------------------'
| <----------
|
| ---.
| XXX o--------o o---------o |
| XX | Action | --> | Receipt | |
| o--------------------------------o o--------o o---------o |
| | | |
| | 1. Validation (block_hash) | o--------o o---------o |
'--> | 2. Verification (signer keys) | | Action | --> | Receipt | | --.
| 3. Routing (receiver) | o--------o o---------o | |
| | | |
o--------------------------------o o--------o o---------o | |
transaction arrives XX | Action | --> | Receipt | | |
XXX o--------o o---------o | |
---' |
|
applied locally OR propagated to other shards |
.---------------------------------------------------------------------------'
| <----------
|
|
| --. .-------. .--. .--. .--. o-----------o
| o---------o | | | | | | | | | | |
'--> | Receipt | | Shard | | | | | | | | | |
o---------o | A | | | | | | | | | |
| --' | | | | | | | | | |
| | | | | | | | | | |
| --. | | | | | | | | | Block |
| o---------o | | Block | | | | | o o o | | | (i) |
'--> | Receipt | | | (i) | | | | | | | | finalized |
o---------o | | | | | | | | | | |
| | Shard | | | | | | | | | |
| o---------o | B | | | | | | | | | |
'--> | Receipt | | | | | | | | | | | |
o---------o | | | | | | | | | | |
--' '-------' '--' '--' '--' o-----------o
| |
'------------------------------------------------'
about 3 blocks to finality
It's unclear to me what you mean by "routed to more than one shard". A receipt can only be routed to one shard. Also I don't understand your description of finality gadget, and I don't know where you get "120 blocks" from. Normally you just need to wait for 3 blocks for a block to be finalized.
Great explanation! Core protocol devs should complete that picture and include it in the low-level documentation!
There are some corrections. A Transaction with all its actions gets converted to a single Receipt. Receipts can have several actions too. Every receipt goes to a single specific shard/receiver account. In the case of a "Transfer" action inside a Transaction/Receipt, it can generate new receipts to complete the transfer:
e.g. Alice sends 100N to Bob
Receipt 1, action Transfer: acting on Alice's account. Alice's account gets 100N deducted. If that succeeds, a 2nd Receipt is created:
Receipt 2, single action: act on Bob's account to "increase balance by 100N". This second receipt gets "published" to be routed to Bob's shard.
If the 2nd receipt fails (no Bob account), a 3rd Receipt is created to refund 100N to Alice. This 3rd Receipt is again published, to be routed back to Alice's shard.
So every receipt (which can have more than one action) is directed to a single specific account, and therefore to a single shard.
.- At least this is what I understand 'til now -.
I'm reading the code, Sherif; more details:
Even if a Transaction has more than one action, each transaction is converted to a single receipt. A Receipt can have more than one action, but a single 'receiver'.
All Receipts are validated. When routed to other shards (if the 'receiver' account is not in the current shard), the receiving node will re-validate the receipt before processing. So there's no trusted/untrusted boundary; everything gets re-validated in the nodes before processing.
All local receipts are processed first, then delayed receipts are checked (waiting for data), and then receipts received from other nodes are processed.
Some Receipts can be "Data Receipts", containing chunks of data required to execute other receipts. It's like sending input data for actions in chunks to other nodes. When all the data chunks are received, the related "Action Receipt" is executed.
When an "Action Receipt" has all its data, every action inside the receipt is executed (see the nearcore source).
There's a loop for every action in the receipt, and the action is applied to the receiver account.
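To paraphrase that loop as a rough pseudocode sketch (these are not the actual nearcore types or function names, just an illustration of "one receiver, a list of actions, applied in order"):

use std::collections::HashMap;

// Illustrative stand-ins, not nearcore's real definitions.
enum Action {
    Transfer { amount: u128 },
    DeleteKey { public_key: String },
    // ... the remaining action kinds
}

struct Receipt {
    receiver_id: String,   // every receipt targets exactly one account
    actions: Vec<Action>,
}

fn apply_receipt(receipt: &Receipt, balances: &mut HashMap<String, u128>) {
    // The loop described above: each action is applied to the single receiver.
    for action in &receipt.actions {
        match action {
            Action::Transfer { amount } => {
                *balances.entry(receipt.receiver_id.clone()).or_insert(0) += *amount;
            }
            Action::DeleteKey { .. } => {
                // would mutate the receiver's access keys here
            }
        }
    }
}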
.-to be continued-.
"Receipts are either applied to the shard on which they first appear or are routed across the network to the proper "home shard" for their respective sender and receiver accounts."
So here is my understanding: an AccountID sends a transaction to the shard it is on, i.e. assigned to for the given epoch, since every epoch there is a reshuffling of accounts across shards. The shard (the set of AccountIDs of validators, etc.) verifies the transaction. If the receiver is on another shard, a receipt is created and routed to the other shard.
While the transaction from the sender can be included in the next block, it will take up to three blocks to validate it and finalize the routing to the receiver shard.

In Robot Framework, is it possible to run test cases in a for loop?

So my issue might be of a syntactic nature, maybe not, but I am clueless on how to proceed next. I am writing a test case in Robot Framework, and my end goal is to be able to run multiple tests back to back in a loop.
In the case below, the Log To Console call works fine and outputs the different values passed as parameters. The next call, "Query Database And Analyse Data", works as well.
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
But then, when I try to make a test case with documentation and tags with "Query Database And Analyse Data", I get the error "Keyword Name cannot be Empty", which leads me to think that when the file gets to the [Documentation] tag, it doesn't understand that it is part of a test case. This is usually how I write test cases.
Please note here that the indentation tries to match the inside of the loop
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | | [Documentation] | Query DB.
| | | | [Tags] | query | voltagevariation
| | | Duplicates Test
| | | | [Documentation] | Packets should be unique.
| | | | [Tags] | packet_duplicates | system
| | | | Duplicates
| | | Chroma Output ON
| | | | [Documentation] | Setting output terminal status to ON
| | | | [Tags] | set_output_on | voltagevariation
| | | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
Now is this a syntax problem, indentation issue, or is it just plain impossible to do what I'm trying to do? If you have written similar cases, but in a different manner, please let me know!
Any help or input would be highly appreciated!
You are trying to use Keywords as Test Cases. This approach is not supported by Robot Framework.
What you could do is make one Test Case with a lot of Keywords:
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | Duplicates
| | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
*** Keywords ***
| Query Database And Analyse Data
| | Do something
| | Do something else
...
You can't really fit [Tags] anywhere useful. You can, however, produce meaningful failure messages (substituting for the [Documentation]) if, instead of using a keyword directly, you wrap it in Run Keyword And Return Status.
Furthermore, please have a look at data-driven tests to get rid of the :FOR loop completely; a minimal sketch follows below.
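For example, a minimal data-driven sketch could look like this (the "Voltage ..." test names and the Query Database At Voltage wrapper keyword are made up; Query Database And Analyse Data is the keyword from the question). Each voltage then becomes its own test case, so it can carry its own [Tags] and [Documentation]:

*** Settings ***
| Test Template | Query Database At Voltage

*** Test Cases ***
| Voltage 120 | ${120}
| Voltage 240 | ${240}

*** Keywords ***
| Query Database At Voltage
| | [Arguments] | ${voltage}
| | Log To Console | Running tests at Voltage: ${voltage}
| | Query Database And Analyse Data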

Moving dev site to production on new account AWS

I am in the process of moving and testing a development site on the actual domain name now and I just wanted to check if I was missing anything and also get some advice.
It is a Magento 1.8.1 install from Turnkey Linux running on an m1.medium instance.
What I have done (so far) is create an image of the development instance, make a new account, and copy it over there. I then made an Elastic IP and associated it with the new instance. Next I pointed the A record of the production domain to the Elastic IP.
Now, if I go to the production domain I get redirected to the development domain. Is there a reason for this?
Ideally I would like to have two instances: a dev one that is off unless needed and, of course, the production one, which is going to be live 24/7. However, if I turn the development domain off, it stops the other too.
I have a feeling it's just because I need to change the occurrences of the dev domain in the Magento database / back end; however, I wanted to get a more knowledgeable answer, as I don't want to break either of the instances.
Also, I should probably mention that the development domain is a subdomain, i.e. shop.mysite.com, and the live one is just normal, i.e. mysite.com. Not entirely sure this is relevant, but I thought it worth a mention.
Thanks in advance for any help.
The reason the URL on your new instance is getting redirected to the old URL is that, in the core_config_data table of your Magento database, the web/unsecure/base_url and web/secure/base_url paths point to your old URL.
So if you are using mysql you can query your database as follows:
mysql> use magento;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from core_config_data;
+-----------+---------+----------+-------------------------------+-------------------------------------+
| config_id | scope | scope_id | path | value |
+-----------+---------+----------+-------------------------------+-------------------------------------+
| 1 | default | 0 | web/seo/use_rewrites | 1 |
| 2 | default | 0 | admin/dashboard/enable_charts | 0 |
| 3 | default | 0 | web/unsecure/base_url | http://magento.myolddomain.com/ |
| 4 | default | 0 | web/secure/use_in_frontend | 1 |
| 5 | default | 0 | web/secure/base_url | https://magento.myolddomain.com/ |
| 6 | default | 0 | web/secure/use_in_adminhtml | 1 |
| 7 | default | 0 | general/locale/code | en_US |
| 8 | default | 0 | general/locale/timezone | Europe/London |
| 9 | default | 0 | currency/options/base | USD |
| 10 | default | 0 | currency/options/default | USD |
| 11 | default | 0 | currency/options/allow | USD |
| 12 | default | 0 | general/region/display_all | 1 |
| 13 | default | 0 | general/region/state_required | AT,CA,CH,DE,EE,ES,FI,FR,LT,LV,RO,US |
| 14 | default | 0 | catalog/category/root_id | 2 |
+-----------+---------+----------+-------------------------------+-------------------------------------+
14 rows in set (0.00 sec)
and you can change it as follows:
mysql> update core_config_data set value='http://magento.mynewdomain.com' where path='web/unsecure/base_url';
mysql> update core_config_data set value='https://magento.mynewdomain.com' where path='web/secure/base_url';

Communication between two applications using Environment Variables

Question
How do you communicate with another program (for instance, a Windows service) through environment variables (not system or user ones)?
What do we have
Well, I have the following scheme for a data logger:
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| Windows | ------------------------------------> | user |
| service | <------------------------------------ | interface |
--------------- udp communication -------------
|^ keyboard
V| and screen
--------
| user |
--------
On current development:
windows service is always running when Windows is running
user can open and close user interface (of course :p)
windows service acquires data from sensors
the user interface automatically requests data from the Windows service every 100 ms and shows it to the user via UDP communication through an implemented protocol (we call this the GetData() command and its response)
the user can send some other commands to change the data to acquire through the implemented protocol (we call this the SetSensors() command and its response)
Both the user interface and the Windows service are developed in Borland C++ Builder 6 and use the NMUDP component, from the FastNet tab, for UDP communication.
What we are thinking of doing
Because of some buffer issues, and to free the UDP channel for only the SetSensors() command and its response, we are considering that instead of using GetData():
the Windows service would get data from the sensors and put it into environment variables
the user interface would read them and show the data to the user
Scheme after doing what we are thinking
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| | ------------------------------------> | |
| | environment variables | |
| | (get data from sensors) | |
| Windows | | user |
| service | | interface |
| | | |
| | ------------------------------------> | |
| | <------------------------------------ | |
--------------- udp communication -------------
(send commands to service) |^ keyboard
V| and screen
--------
| user |
--------
Is there any way to do that?
We would not use system or user environment variables, because those are written to the Windows Registry, i.e. saved to the hard drive, which makes things slower...
As @HansPassant said, I cannot do that directly. Although I saw some ways to do it via a memory-mapped file, it is easier to just add one more UDP communication channel on another port (a sketch of the memory-mapped-file option follows the diagram below). So:
------------------------- --------------------------------
| the things to measure | | the things that do something |
------------------------- --------------------------------
| ^
| sensors | switches
V |
-------------------------------------------------------------------
| dedicated hardware |
-------------------------------------------------------------------
| ^
| | serial communication
V |
--------------- -------------
| | ------------------------------------> | |
| | udp communication (port 3) | |
| | (get data from sensors) | |
| Windows | | user |
| service | | interface |
| | (port 1) | |
| | ------------------------------------> | |
| | <------------------------------------ | |
--------------- udp communication (port 2) -------------
(send commands to service) |^ keyboard
V| and screen
--------
| user |
--------
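For reference, here is a minimal sketch (not part of the original setup) of the memory-mapped-file option mentioned above, using the plain Win32 API, which is available from C++ Builder 6. The object name "Global\\SensorData" and the SensorSample layout are assumptions for the example; a real service-to-UI split also needs a security descriptor that allows the user session to open the mapping created by the service.

#include <windows.h>

struct SensorSample
{
    DWORD  sequence;    // incremented by the service on every update
    double values[16];  // latest sensor readings
};

// Service side: create the shared block once, then overwrite it on every acquisition.
SensorSample* CreateSharedSample()
{
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     0, sizeof(SensorSample), "Global\\SensorData");
    if (hMap == NULL) return NULL;
    return (SensorSample*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                        0, 0, sizeof(SensorSample));
}

// UI side: open the same block and copy the latest sample (e.g. every 100 ms).
bool ReadSharedSample(SensorSample& out)
{
    HANDLE hMap = OpenFileMappingA(FILE_MAP_READ, FALSE, "Global\\SensorData");
    if (hMap == NULL) return false;
    const SensorSample* view =
        (const SensorSample*)MapViewOfFile(hMap, FILE_MAP_READ,
                                           0, 0, sizeof(SensorSample));
    if (view == NULL) { CloseHandle(hMap); return false; }
    out = *view;                    // copy out before unmapping
    UnmapViewOfFile((LPVOID)view);
    CloseHandle(hMap);
    return true;
}

The extra UDP port that was actually chosen avoids these naming and permission details entirely, which is why it ended up being the simpler option here.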
If someone provides a better solution, I'll mark it as the solution in the future.
