Fetch all Slack channels (public, private, DMs, group chats) [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 11 hours ago.
I'm trying to build autocomplete/search over Slack channels in a standard dropdown, for which I need the list of all the Slack channels the user has access to in a workspace.
Please note that this list of channels will be displayed in a dropdown in a web application, not inside the Slack client. I need the following information to populate the dropdown for the autocomplete/search functionality:
All public channels in the workspace.
Private channels the user is a part of.
Group chats the user is a part of.
Direct messages.
I've tested the conversations.list method for this (https://api.slack.com/methods/conversations.list), but I suspect I might run into rate limiting issues quickly. The documentation puts the method in rate limit tier 2. Quoting from the documentation:
Limit - 20+ per minute. Most methods allow at least 20 requests per minute, while allowing for occasional bursts of more requests.
The statement is a little confusing because it doesn't specify an upper bound for the rate limit; in particular, the phrase "at least 20 requests per minute" is ambiguous. How many requests can I actually make against the method in one minute? Only 20? More than 20? 50? 100? Where is the cap?
Also, in my use case there is a high chance of workspaces with thousands of public channels, and of several users searching for channels concurrently in the web application for their workspace. That is why I expect to run into rate limiting issues.
How can I accomplish this scenario?
FYI, the web application's backend is in Java.
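One way to stay inside tier 2 is to stop calling the API per keystroke: fetch the full channel list once per user/workspace, cache it server-side, and serve autocomplete from the cache, refreshing in the background. Below is a minimal sketch of the paginated fetch, assuming the official Slack Java SDK (com.slack.api:slack-api-client) and a user token with the channels:read, groups:read, im:read and mpim:read scopes; rather than guessing Slack's undocumented cap, it honors the Retry-After header that accompanies an HTTP 429 response.

```java
import com.slack.api.Slack;
import com.slack.api.methods.MethodsClient;
import com.slack.api.methods.SlackApiException;
import com.slack.api.methods.response.conversations.ConversationsListResponse;
import com.slack.api.model.Conversation;
import com.slack.api.model.ConversationType;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChannelCache {

    // Fetches every conversation visible to the token, following cursor
    // pagination and backing off when Slack answers with HTTP 429.
    public static List<Conversation> fetchAll(String userToken)
            throws IOException, SlackApiException, InterruptedException {
        MethodsClient methods = Slack.getInstance().methods(userToken);
        List<Conversation> all = new ArrayList<>();
        String cursor = null;

        do {
            final String current = cursor;
            try {
                ConversationsListResponse resp = methods.conversationsList(r -> r
                        .types(Arrays.asList(
                                ConversationType.PUBLIC_CHANNEL,
                                ConversationType.PRIVATE_CHANNEL,
                                ConversationType.MPIM,  // group chats
                                ConversationType.IM))   // direct messages
                        .limit(1000)                    // biggest page = fewest calls
                        .cursor(current));
                if (!resp.isOk()) {
                    throw new IllegalStateException("Slack error: " + resp.getError());
                }
                all.addAll(resp.getChannels());
                cursor = resp.getResponseMetadata().getNextCursor();
            } catch (SlackApiException e) {
                if (e.getResponse().code() == 429) {
                    // Rate limited: wait as long as Slack asks, then retry this page.
                    String retryAfter = e.getResponse().header("Retry-After");
                    Thread.sleep(1000L * (retryAfter != null ? Long.parseLong(retryAfter) : 30));
                } else {
                    throw e;
                }
            }
        } while (cursor != null && !cursor.isEmpty());

        return all;
    }
}
```

With a cache like this, each workspace costs only a handful of conversations.list calls per refresh, no matter how many users are typing, which sidesteps the question of the exact per-minute cap.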

Related

Azure Web Application Gateway performance with load test [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 5 years ago.
I have a Visual Studio load test that runs through the pages on a website, but I have seen big differences in performance when using a load balancer. If I run the tests going straight to Web Server 1, bypassing the load balancer, I get an average page load time of under 1 second for 100 users, as an example. If I direct the same test at the load balancer with 2 web servers behind it, I get an average page load time of about 30 seconds: it starts quick but then deteriorates. This is strange, since with 2 load-balanced web servers instead of 1 direct server I would expect to handle more load, not less. I am testing this with Azure Application Gateway and Azure VMs. I experienced the same problem previously with an Nginx setup; I thought it was due to that setup, but now I see the same on Azure. Any thoughts would be great.
I had to completely disable the firewall to get consistent performance. I also ran into other issues with the firewall: it gave us max entity size errors from a security module, and after discussing with Azure Support it turned out this entity size cannot be configured, so keeping the firewall would mean some large pages would no longer function and would get this error. This happened even with all rules disabled; I spent a lot of time experimenting with different rules on and off, and the SQL injection rules in particular didn't seem to like our ASP.NET Web Forms site. I have now simulated 1,000 concurrent users split between two test agents, and performance was good for our site, with average page load time well under a second.
Here is a list of things that helped me improve the same situation:
Add a non-SSL listener and use that (e.g. HTTP instead of HTTPS). Obviously this is not the advised solution, but it may give you a hint (offload SSL to the backend pool servers? Add more gateway instances?).
Disable WAF rules (slight improvement).
Disable WAF and add more gateway instances (increased from 2 to 4 in my case) - SOLVED THE PROBLEM!

Backup domain controller with Exchange - remove it? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
So in my network I have the main domain controller and a backup domain controller. The backup domain controller has Exchange on it.
The Exchange services have been shut down, as I have moved email hosting off-site. So I now have no need for the backup domain controller that was running Exchange, and I want to shut it down for good.
What would be the proper way to remove it from its role as a backup domain controller in Active Directory?
Both domain controllers are Server 2008.
Thanks much!
Firstly, just don't do it; this is a sysadmin sin! You're shooting yourself in the foot. Even for my smallest customers with only 10 members of staff, I often have them purchase a second server to act as a secondary domain controller, DNS server, DHCP server, etc.
It is one of the first things Microsoft recommends as best practice when setting up a domain, and one of the first things taught on the MCSA course: when creating a domain, a secondary domain controller should be set up. If you have more than 20 users it's a must, IMHO. Many things can go wrong, and too many times clients have incurred big bills (man-hours) because they didn't spend that extra £2000 on another server. I strongly recommend you keep it. It's not just availability: it prevents a large number of corruption issues which can linger for weeks before presenting themselves, which makes even 7x daily backups no help. It's your safety net.
If you must get rid of it, first check it doesn't hold any FSMO roles, then run dcpromo following the steps here: http://technet.microsoft.com/en-us/library/cc771844(v=ws.10).aspx
Lastly, you're getting down-voted because Stack Overflow only likes coding questions; they want you to use Server Fault, which is part of the same family.

How to write User Stories for technical implementation details? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I'm trying to work in a more organised way and started adopting user stories.
I think I have a misunderstanding of how I should use user stories for technical work.
Let's say I'm coding an app that gives me the ranking of my site for a certain Keyword in Google.
The user story goes like that:
As an Internet Marketer
I want to find out where my website ranks for a keyword
So I'll know whether my SEO efforts work
Now this is pretty straightforward and user-centric... However, what happens if I need to introduce proxies into the loop?
On one hand, proxies are a technical implementation detail; on the other hand, proxies are part of the Internet Marketer's domain.
How should I craft such a story?
As an Internet Marketer
I want to use Proxies when searching in Google
So we'll be able to check a lot of keywords without Google blocking us
The above scenario doesn't sound right to me... Maybe I can rewrite it to something like:
As an Internet Marketer
I want to be able to check a lot of Keywords at a time
So it'll save me time
This sounds more right; however, what acceptance criteria can I give it? Try scraping Google 100 times in a minute? Isn't that a waste of time?
Here's another scenario: how should I craft a user story when the feature I want to implement is that a proxy can be used only once every 30 seconds? I don't have any idea how to approach this problem from a user-centric perspective...
Another thing I thought of doing is to introduce another role. Instead of being centered around the Internet Marketer, I can say we have a role called Google Scraper, and that the Internet Marketer works with the Google Scraper.
Now I can write a user story like:
As Google Scraper
I want to change proxies every Search
So Google won't ban me
What would you say about approaching technical implementation details like above? It can also help breaking the system down into modules...
You don't write technical stories. User stories should meet the INVEST criteria.
Proxies do sound like an implementation detail and should be avoided. You should not be mentioning proxy servers in your story. Even if they are part of the domain, there are potentially other ways to achieve the same effect.
Instead of writing "I want to use a Proxy, so that I don't get blocked", you should write "I want to disguise my identity, so that I don't get blocked". If I were your customer, I wouldn't know why you wanted a proxy. Is it a forward, open, or reverse proxy? There are loads of uses for a proxy server. You should pick the feature that you want to exploit.
However, you shouldn't get too hung up on perfect stories. The agile manifesto says, "Individuals and interactions over processes and tools".
When writing a user story, you should also consider the 3 C's: Card, Conversation, Confirmation. Do both the customer and you understand the meaning of the story?
Does the card meet INVEST criteria? If you answered yes to both those questions then the story is fine.
User stories should not include technical details. During sprint planning, technical details should be added as delivery team tasks nested below the user story. These tasks should be created through discussion by the delivery team. You should not attempt to document every implementation detail under the sun, as you will reach a point of diminishing returns; aim for 60-75 percent coverage of implementation details (tasks) for each user story, as the details may change once coding begins. Any additional details developers discover during coding can be shared and documented briefly during the daily stand-up. The user story can stay simple and non-technical while the delivery/development team fleshes out the details as nested tasks.
These tasks should be visible to developers through their Integrated Development Environment (IDE). As developers complete tasks, they can associate their checked-in code with the task in your work item tracking tool (Jira, Team Foundation Server, On-Time).

Openfire performance on EC2 [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
We are planning to introduce a real-time chat feature in our mobile apps. Of course, we would be going the XMPP way.
Can anybody shed some light on stats for the maximum number of concurrent users Openfire has supported on EC2 instances (Windows Server) of different sizes in the real world?
We are looking at numbers ranging from 22,500 to 75,000 concurrent users, depending on the growth patterns predicted for app downloads and user adoption of this new real-time chat feature. Time range: the next 12 months.
From whatever googling I have done so far, it seems Openfire may not be the best bet when it comes to scaling out. So, can these numbers be supported on a single EC2 instance over time? That is, we would start hosting on smaller instances and keep increasing the instance size as load demands.
ejabberd seemed to be the best option when it comes to scaling out, but since we would need Erlang skills to extend it, that makes ejabberd a difficult choice for us. The other alternative is Tigase, which is Java, so we could extend it much more easily. But if Openfire can work for us for the next 12 months or so by scaling up rather than scaling out, we would be happy to use it for now and see how well this new chat feature is embraced; the number one reason being ease of management.
Lastly, if you could help with links to SaaS/PaaS providers for XMPP chat plus push notifications to mobile devices when the user is offline, that would be awesome. We got in touch with quickblox.com, but their enterprise offerings appear to be expensive for us at the moment. We want 100% ownership and portability of our data if we go the SaaS/PaaS way.
There are several references to Openfire handling those and larger numbers of concurrent users on a single server.
There is a document on scalability from 2007 that shows 50,000 users supported on version 3.2. The current release is 3.7.1. Don't forget that the 2007 figure also implies a much slower machine than anything you are likely to run on today.
You also have to take into account what features of XMPP you will be using, but simple messaging should be able to easily handle the numbers you are referring to.
The numbers you mention should be easily handled by ejabberd.
I am unsure as to how you want to "extend" ejabberd. Multi-user chat and messaging are handled fine by all servers and of course ejabberd. Additionally, if you are thinking of custom protocols, these can be written in your language of choice and connect to ejabberd as an XMPP component.
The only thing you might miss is a web interface (which ejabberd has but it's rather limited), but then again if you expect to manage things through a web UI for an application, you will need to think again ;)
If you want to go with ejabberd, you can always get support from ProcessOne.
This is another plus for ejabberd, as it can be commercially supported if you want to / can afford it.
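Whichever server you end up with, the client code stays the same because XMPP is a standard protocol, so you can benchmark Openfire and ejabberd with an identical test client. Here is a minimal connect-and-send sketch using the Smack Java library (assuming Smack 4.x; the domain, host, and credentials are placeholders):

```java
import org.jivesoftware.smack.AbstractXMPPConnection;
import org.jivesoftware.smack.chat2.Chat;
import org.jivesoftware.smack.chat2.ChatManager;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
import org.jxmpp.jid.EntityBareJid;
import org.jxmpp.jid.impl.JidCreate;

public class XmppSmokeTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details: point these at your Openfire
        // (or ejabberd) test instance to compare servers with the same client.
        XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
                .setXmppDomain("chat.example.com")
                .setHost("chat.example.com")
                .setUsernameAndPassword("loadtest-user", "secret")
                .build();

        AbstractXMPPConnection connection = new XMPPTCPConnection(config);
        connection.connect().login();

        // One-to-one message; group chat would go through MultiUserChatManager instead.
        EntityBareJid peer = JidCreate.entityBareFrom("other-user@chat.example.com");
        Chat chat = ChatManager.getInstanceFor(connection).chatWith(peer);
        chat.send("ping");

        connection.disconnect();
    }
}
```

Running many such clients from a load generator (Tsung, for example, speaks XMPP natively) will tell you far more about your instance sizing than the 2007 whitepaper.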
Android push notifications are a good solution.
With Android push services, you (the Android developer) can send messages directly to the people who have installed your app. All you need is to include a code snippet in your app, then post to a specific URL to reach your app users, even if your app is inactive on their phone.
Features:
Free
Free, unless you need an extensive number of pushes for your app. Of course you can pay for more pushes and quicker tech support.
Easy
Extremely easy to integrate into your app.
Super simple to push to the app: just send a URL request.
No C2DM limits; you don't have to have a Gmail account to use the push service.
Cloud service, so no need to set up your own push server.
Effective
Low battery and network consumption on the phone.
Track user interaction to find out how users react to your push.

How do SMS gateways work? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I've been looking at systems such as txtlocal, esendex and clickatell. I need to send out a very large number of messages and would ideally like to go in at a lower level than using systems like these. Does anyone know how SMS gateways like the ones I've listed work in terms of actually sending out the messages? Do they have agreements with different carriers and send the messages out programmatically? I've tried contacting some UK carriers directly, but as of yet I haven't had any success getting any information from them.
Aggregators typically work by talking directly to a mobile carrier's internal SMSC over IP/X.25/frame relay, using a protocol like SMPP or CIMD to request a message send.
They will have connections to multiple networks' SMSCs so they can do least-cost routing (i.e., sending a message to a user via their home network is cheaper).
Here are some contact details for Orange/Voda.
That said, MXTelecom, as mentioned by Phill, offers a good gateway service, as does mBlox - both of whom have already done all the hard (and expensive) work for you.
Working with an aggregator is definitely worth the effort. They handle the legal contracts with the providers as well as with the auditing services. You can go directly to a provider (e.g. AT&T) and broker the deal yourself, but generally speaking you'll only need that if you have very specific program/campaign needs. Coke, for example, brokered their own deal to get the four-digit shortcode for COKE (2653).
Keep in mind that when working with an aggregator like MXTelecom you'll be signing a contractual agreement with them (usually for 6 to 12 months), and it'll take 8-12 weeks (in the US) to get your shortcode provisioned and set up. It's not the funnest process, IMHO.
Oh, and don't forget, they will audit your system to make sure it does what it says it will do in your campaign document.
It is also possible to create your own system (at least in the US) and use a long code. One of our original prototype systems was built with Kannel, using a mobile phone tethered to an Ubuntu box. With an unlimited plan it was quite nice. Usage is governed by your carrier contract, so be mindful.
Per your question of how they work: they generally work via an API (HTTP and SMPP are the most common). Depending on your in/out volume, you may want to put a queue between your application and the aggregator's API, as sketched below.
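As an illustration of the HTTP flavor, here is a sketch that POSTs one message to a hypothetical aggregator endpoint using Java's built-in HttpClient. The URL, field names, and auth scheme are made up for the example, since every aggregator (Clickatell, mBlox, etc.) defines its own:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SmsSender {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Submits one SMS via a hypothetical aggregator HTTP API. Endpoint,
    // field names, and auth header are illustrative only; substitute the
    // values from your aggregator's documentation.
    static void send(String apiKey, String to, String text) throws Exception {
        String body = "to=" + URLEncoder.encode(to, StandardCharsets.UTF_8)
                + "&content=" + URLEncoder.encode(text, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example-aggregator.com/v1/messages")) // hypothetical
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```

For high volume you would feed this from a queue (a BlockingQueue or a message broker) and throttle submissions to the rate in your aggregator contract.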
First, if you're going to do any bulk SMS messaging, you should get a short code. An aggregator will have all the necessary APIs/SDKs and documentation for you.
Try MXTelecom (AKA OpenMarket)
