Exchange 2007 - GetUserAvailability over 128 mailboxes?

When making a GetUserAvailability call passing in 128 mailboxes, Exchange 2007 returns an EmailAddressArray error stating that the allowed size of the array is 100.
Is there a way to increase the array size beyond 100, so that Exchange 2007 can answer a GetUserAvailability request for all 128 mailboxes?
I'm currently getting the following error:
System.Web.Services.Protocols.SoapException: Microsoft.Exchange.InfoWorker.Common.Availability.IdentityArrayTooBigException: There are too many target users in the EmailAddress array. The allowed size = 100; the actual size = 128. ---> There are too many target users in the EmailAddress array. The allowed size = 100; the actual size = 128.

No, at this time there is no way to increase this maximum number. Sorry, Richard
"The Availability service expands distribution lists to retrieve the free/busy status for each member of the list, as long as the number of mailboxes in the distribution list is less than 100 identities, which is the maximum number of identities that the Web service method can request."
Source: MSDN Library > Exchange Web Services Operations > GetUserAvailability Operation
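Since the cap cannot be raised, the usual workaround is to split the request into batches of at most 100 mailboxes and merge the responses. A minimal sketch of the batching logic in Python, assuming get_user_availability is a hypothetical wrapper around your EWS client call that returns a dict of per-mailbox results:

MAX_IDENTITIES = 100  # hard limit enforced by the Availability service

def chunked(items, size):
    # Yield successive slices of at most `size` items.
    for i in range(0, len(items), size):
        yield items[i:i + size]

def get_availability(mailboxes, get_user_availability):
    # Call the service once per batch and merge the per-mailbox results.
    results = {}
    for batch in chunked(mailboxes, MAX_IDENTITIES):
        results.update(get_user_availability(batch))
    return results

For 128 mailboxes this issues two requests (100 + 28) instead of one oversized one.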

Related

BEP20 token not showing the total supply I entered in the code on BscScan

I have deployed a BEP20 token, following the steps in this tutorial: https://docs.binance.org/smart-chain/developer/issue-BEP20.html
I entered total supply = 60000000000, but after verifying the contract, the total supply I entered is not showing. Can anyone help me fix the total supply? The contract address is 0xE2cFe49999e3a133EaFE13388Eb47BCd223f5c5E
Your token uses 18 decimal places, which means the value 60000000000 hardcoded on line 359 of your contract represents 0.00000006 of the token. The BSCScan token tracker shows a total supply of 0 AAG simply because it rounds to a predefined number of decimals.
If you want a total supply of 60 billion, you need to add 18 zeros after this number to account for the decimals.
_totalSupply = 60000000000 * 1e18;
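To see why the raw value is off by a factor of 10^18, here is the same arithmetic as a quick Python check (decimals = 18, as in the contract):

decimals = 18
raw = 60000000000                       # the value hardcoded on line 359
print(raw / 10**decimals)               # 6e-08, which BSCScan rounds to 0
corrected = 60000000000 * 10**decimals  # raw units for 60 billion whole tokens
print(corrected / 10**decimals)         # 60000000000.0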

Is there any API endpoint to see the current maximum supply for EGLD?

According to the Elrond Economics paper, the maximum supply for EGLD is 31,415,926 (theoretical).
However, this theoretical cap actually decreases with each processed transaction and the fees it generates.
Is there any API endpoint that returns the actual maximum supply (adjusted based on the economics)?
The closest endpoint that I found is:
https://api.elrond.com/economics
which returns:
...
"totalSupply": 22711653,
"circulatingSupply": 20051653,
"staked": 12390333,
...
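For reference, polling that endpoint takes a few lines with Python's requests. The field names below are the ones from the response shown above, and notably none of them is an adjusted maximum supply, which is exactly the open question:

import requests

# Query the public Elrond economics endpoint mentioned above.
resp = requests.get("https://api.elrond.com/economics", timeout=10)
resp.raise_for_status()
economics = resp.json()

# Fields observed in the response; no explicit adjusted max supply among them.
print(economics["totalSupply"], economics["circulatingSupply"], economics["staked"])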

YouTube API - Subscriptions list returns different numbers of total results

I'm trying to get the complete list of my subscriptions. I've tried 3 methods, all of them return a different number of subscriptions, and I don't know what to do :)
1: Using Subscriptions: list with channel ID:
https://www.googleapis.com/youtube/v3/subscriptions?part=snippet&channelId=MY_CHANNEL_ID&maxResults=50&key=MY_API_KEY
"totalResults" is 942
2: Using Subscriptions: list with the "mine" flag: the "totalResults" field is 991.
Where do the extra 49 subscriptions come from?
3: Open a browser in incognito mode, go to
https://www.youtube.com/channel/MY_CHANNEL_ID
Click on "Channels" tab, scroll down to the end of the subscriptions list, open console and type something like that
document.querySelectorAll("#contents #items > *").length
I see 1039. Where do another 48 subscriptions come from?
And 1039 seems to be the most accurate number: the grid shows 6 subscriptions per row and the last row has only 1 item, so 173*6+1 = 1039.
So the question is: how do I get all 1039 subscriptions via the API? And why does it return the wrong number of subscriptions?
You are using Subscriptions: list, which shouldn't have this kind of bug with totalResults. However, there may be a YouTube Data API v3 endpoint bug like the one documented for Search: list, where totalResults is:
integer
The total number of results in the result set. Please note that the value is an approximation and may not represent an exact value. In addition, the maximum value is 1,000,000.
You should not use this value to create pagination links. Instead, use the nextPageToken and prevPageToken property values to determine whether to show pagination links.
So I would recommend enumerating all your subscriptions with the methods you described and counting them yourself by following nextPageToken, as in the sketch below.
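A minimal pagination sketch in Python, assuming the same Subscriptions: list endpoint as in method 1 (MY_CHANNEL_ID and MY_API_KEY are placeholders):

import requests

URL = "https://www.googleapis.com/youtube/v3/subscriptions"
params = {
    "part": "snippet",
    "channelId": "MY_CHANNEL_ID",
    "maxResults": 50,
    "key": "MY_API_KEY",
}

subscriptions = []
while True:
    data = requests.get(URL, params=params, timeout=10).json()
    subscriptions.extend(data.get("items", []))
    token = data.get("nextPageToken")
    if not token:
        break
    params["pageToken"] = token  # follow the pagination instead of trusting totalResults

print(len(subscriptions))  # the count you actually retrieved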

How do "geth", "EstimateGas", and "Suggest (Gas) Price" work?

My friend asked me how geth estimates gas limits and gas prices. How does it do this?
If you send transactions without a gas limit or gas price via the RPC API, geth falls back to EstimateGas() or SuggestPrice(). Remix uses these, too. This behavior is as of geth v1.8.23; different versions may work differently.
EstimateGas
input: block number (default: "pending"), 'gas limit' of the transaction (default: gas limit of the given block number)
EstimateGas tries to find the minimal gas needed to run the transaction at the given block number. It does a binary search between 21000 and 'gas limit'. For example, if 'gas limit' is 79000, it first tries to run the transaction with a gas limit of 50000 = (21000 + 79000) / 2. If that fails, it tries 64500 = (50000 + 79000) / 2, and so on. If the transaction fails even with 'gas limit', it returns 0 and the error message "gas required exceeds allowance or always failing transaction".
NOTE: even if a transaction fails for reasons unrelated to gas, EstimateGas treats the failure as insufficient gas, so it still returns 0 with that error message in the end.
source: geth /internal/ethapi/api.go
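The binary search described above is easy to restate in Python. This is a sketch of the idea, not geth's actual Go code; run_transaction is a hypothetical callback that executes the transaction against the given block with a given gas limit and reports success:

MIN_GAS = 21000  # intrinsic cost of a plain value transfer

def estimate_gas(run_transaction, gas_limit):
    # Binary-search the smallest gas limit at which the transaction
    # succeeds, between MIN_GAS and the block's/transaction's gas limit.
    lo, hi = MIN_GAS, gas_limit
    while lo < hi:
        mid = (lo + hi) // 2
        if run_transaction(mid):  # succeeded with `mid` gas
            hi = mid              # a smaller limit might still work
        else:
            lo = mid + 1          # needs more gas (or fails for another reason)
    if not run_transaction(hi):   # fails even with the full gas limit
        raise ValueError("gas required exceeds allowance or always failing transaction")
    return hi

With gas_limit = 79000 this probes 50000 first and, on failure, 64500 next, matching the walk-through above.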
Suggest(Gas)Price
input: number of blocks to search (default: 20, --gpoblocks), price percentile (default: 60, --gpopercentile), fallback result (default: 1 GWei, --gasprice)
SuggestPrice queries the gas prices of 'number of blocks' recent blocks, starting from the "latest" block, in parallel. If it cannot get answers for more than half of 'number of blocks' for any reason, it queries further blocks, up to five times 'number of blocks' in total.
The gas price of a block here means the minimum gas price among the transactions in that block. Transactions sent by the miner of that block are excluded.
SuggestPrice sorts the per-block gas prices, then picks the given percentile among them (0 for the smallest price, 100 for the largest). It caches this result and immediately returns the cached result for the same "latest" (mined) block.
If all tries fail, it returns the last cached result. If there is no previous result, it returns the 'fallback result'. SuggestPrice never returns more than 500 GWei.
source: geth /eth/gasprice/gasprice.go
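The percentile pick itself is straightforward. A sketch under the defaults above, assuming block_min_prices already holds the minimum gas price (in wei) of each sampled block, with miner-sent transactions excluded:

MAX_PRICE = 500 * 10**9  # the 500 GWei cap, in wei

def suggest_price(block_min_prices, percentile=60, fallback=10**9):
    # Pick the given percentile of the recent blocks' minimum gas prices
    # (0 = cheapest sampled block, 100 = most expensive), capped at 500 GWei.
    if not block_min_prices:
        return fallback  # no usable blocks: fall back to --gasprice
    prices = sorted(block_min_prices)
    index = (len(prices) - 1) * percentile // 100
    return min(prices[index], MAX_PRICE)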

Virality algorithm for different types of objects

For a project I need to rank certain objects based on events on/with that specific object. But the objects to be ranked aren't alike.
Some background: the application is a social-network-like document-management system. There are a lot of users, who can upload/post 'documents' of various types (videos, external articles, e.g. found on a relevant blog, articles written within the system itself, etc.). But user-to-user messages should also appear in the feed, as well as system messages, etc.
To break it down a little, let's assume these three object types should appear in the news feed, ranked/sorted on virality, which is based on events:
Documents
System messages
User-to-user (or user-to-group) messages
A few parameters that are important for the ranking, per object:
Documents
Number of views
Number of comments
Number of shares
Affinity with the document (user has commented on it, shared it, etc.)
Correspondence of tags the user is enlisted to
System messages
Importance level (e.g. 'Notice', 'Announcement')
User/group messages
Level of engagement in the conversation
And to make it harder, the date the object was created is important, as well as the date and correlation of the occurring events. To add one more layer of complexity: pretty much everything is relative. E.g. the number of views a document needs before it counts as 'viral', and as such appears in the news feed, depends on the average number of views. The same goes for comments, but for comments the posting date and the time between new comments matter as well... (Oh, and in case it wasn't clear, ranking is always relative to a user, not system-wide.)
My first thought was to define a max score (Sm) for each object type, define when an object reaches its Sm, and calculate the actual score (Sa). I.e. system messages have an Sm of 100, user/group messages 80, and documents 60. This means that if one of each object is created at exactly the same time, and no other parameters (comments etc.) are available yet, the system message is listed first, the user message comes next, and, last but not least, the document.
So for each type of object, I'm looking for a formula like:
S(a) = S(m) * {calculations here}
For the system message it isn't that hard, I guess, as it only has two parameters that affect the Sa (date and importance level). So its scoring formula could look like this (I is the numeric importance level):
S(a) = S(m) * I * (1 / (now() - date_posted()))
Let's assume a notice has I=10 and an announcement has I=20; the scores for a notice posted yesterday and an announcement posted 2 days ago would be:
Notice: S(a) = 100 * 10 * (1 / 1) = 1000
Announcement: S(a) = 100 * 20 * (1 / 2) = 1000
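As a sanity check, the same numbers in code (a throwaway sketch; days_old stands in for now() - date_posted(), measured in days):

def system_message_score(s_max, importance, days_old):
    # S(a) = S(m) * I * (1 / age), as defined above
    return s_max * importance * (1.0 / days_old)

print(system_message_score(100, 10, 1))  # notice, posted yesterday -> 1000.0
print(system_message_score(100, 20, 2))  # announcement, 2 days ago -> 1000.0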
Now for the documents, and I'm really racking my brain on that one...
I've got the following parameters to take into account:
V(o) = number of views
V(a) = average number of views
C(o) = total number of comments
C(a) = average number of comments on this type of object
C(u) = number of comments by the user
SH(o) = total number of shares of this object
SH(a) = average number of shares of this type of object
SH(u) = has the user shared the document (1 = no, 2 = yes)
T = number of enlisted tags
I found a simplified example of how Facebook calculates 'virality' here. They use the following formula:
Rank = Affinity * Weight * Decay
And if I translate that to my use case: the affinity would be the outcome of a calculation on the parameters listed above; the weight would be the max score, altered a bit based on the total number of views and shares divided by the average number of views and shares; and the decay would be a complex calculation based on the correlation of the fired events and the date the object was created.
I'm giving it a try:
Affinity = C(u) * SH(u) * T * SH(u)
Weight = S(m) * (V(o) / V(a)) * (SH(o) / SH(a)) * (C(o) / C(a))
Decay = (1 / (now() - date_created())) * (1 / (now() - date_of_last_comment()))
This will get me some kind of ranking, but it lacks a few things:
it has no relation whatsoever to the ranking of a system message, and thus sorting would be meaningless
the frequency of new comments isn't taken into account
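To make the attempt concrete, here are the formulas above as a Python sketch (all names are hypothetical placeholders; counts, averages and ages come from wherever you store them, and ages are measured in days > 0):

def document_score(s_max, c_u, sh_u, t,
                   v_o, v_a, c_o, c_a, sh_o, sh_a,
                   days_since_created, days_since_last_comment):
    # Rank = Affinity * Weight * Decay, using the formulas above as written
    affinity = c_u * sh_u * t * sh_u  # SH(u) appears twice, as in the formula
    weight = s_max * (v_o / v_a) * (sh_o / sh_a) * (c_o / c_a)
    decay = (1.0 / days_since_created) * (1.0 / days_since_last_comment)
    return affinity * weight * decay

Written out like this, the two gaps become visible: nothing ties the scale of this score to the system-message formula, and the frequency of new comments never appears.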
So now I'm stuck...
To get to the point, my questions are:
Is this a good approach, or should I try something totally different?
If so, what direction should I go in?
