I have two datasets that each feed a separate report, and I have set both datasets to refresh automatically every morning at 10:30. I am refreshing from the Power BI service in the cloud, and a gateway is involved. The scheduled refresh for one report took 17 minutes, whilst the refresh for the other took over an hour. When I refresh manually, it takes no longer than two minutes. Is there anything I can do to shorten the scheduled refresh?
Your question is very broad and should possibly be closed, but I'll give you a few places to look.
Are you manually refreshing from the Power BI service? Remember that the Power BI service refreshes from the cloud, while Power BI Desktop refreshes from your machine, so the location of your data sources matters here. Further, if there is a gateway involved, it adds an extra hop to the Power BI service.
Do the reports compete for a database or API resource? Try scheduling one for 10:00 and the other for 10:30 to see whether there is contention in any shared sources.
How busy are the sources at 10:30 am? Are other jobs (e.g. backups) running at that time?
Try various times and combinations to see where the delays may be. If you have access to the sources, add monitoring to them to understand how they are performing.
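If you can call the Power BI REST API, the dataset's refresh history is a quick way to compare the scheduled and on-demand runs side by side. A minimal sketch (Java 11+); the dataset ID and the Azure AD bearer token are placeholders you would supply:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RefreshHistory {
    public static void main(String[] args) throws Exception {
        String datasetId = "YOUR-DATASET-ID";   // placeholder
        String token = "YOUR-AAD-ACCESS-TOKEN"; // placeholder

        // Last 10 refreshes: each entry reports refreshType (Scheduled/OnDemand),
        // startTime, endTime and status, which is enough to compare durations.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.powerbi.com/v1.0/myorg/datasets/"
                        + datasetId + "/refreshes?$top=10"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```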
Also keep in mind that with the recent Microsoft outages, things seem to be somewhat slow at times, so if this is a recent issue it may be transient.
For reporting purposes in an organization, someone runs a query against an Oracle database every month after changing its date parameters, exports the result to Excel, and sends the file via Outlook to a receiver (an analyst). There are several analysts and several queries, with an N:N (many-to-many) relationship between them.
I'm working on making this process more "automatic", and I have thought of these approaches:
1. Deploy a web application on my machine with an authentication page. Each user is then taken to a list of the reports they are allowed to view, chooses a max-date and a min-date, and downloads the Excel file with the data exported from the Oracle database.
2. A batch script executed at the end of every month (or on a date chosen by the analysts) that runs the Oracle query and exports the result to an Excel file, and then either:
2.1 sends the file via Outlook, or
2.2 saves the file in a folder on my machine and makes it accessible to the analysts over the local network (a rough sketch of this approach follows below).
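A rough sketch of what approach 2 could look like in Java, assuming the Oracle JDBC driver and Apache POI are on the classpath; the connection details, query, and output path are all placeholders:

```java
import java.io.FileOutputStream;
import java.sql.*;
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class MonthlyExport {
    public static void main(String[] args) throws Exception {
        // placeholders: real values would come from configuration
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";
        String sql = "SELECT * FROM report_view WHERE report_date BETWEEN ? AND ?";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDate(1, Date.valueOf(args[0])); // min date, e.g. 2023-01-01
            ps.setDate(2, Date.valueOf(args[1])); // max date, e.g. 2023-01-31

            try (ResultSet rs = ps.executeQuery();
                 Workbook wb = new XSSFWorkbook();
                 FileOutputStream out = new FileOutputStream("report.xlsx")) {
                Sheet sheet = wb.createSheet("Report");
                ResultSetMetaData meta = rs.getMetaData();

                // header row taken from the query's column labels
                Row header = sheet.createRow(0);
                for (int c = 1; c <= meta.getColumnCount(); c++) {
                    header.createCell(c - 1).setCellValue(meta.getColumnLabel(c));
                }

                // one spreadsheet row per result row
                int r = 1;
                while (rs.next()) {
                    Row row = sheet.createRow(r++);
                    for (int c = 1; c <= meta.getColumnCount(); c++) {
                        row.createCell(c - 1).setCellValue(rs.getString(c));
                    }
                }
                wb.write(out);
            }
        }
    }
}
```

The resulting report.xlsx could then be attached to a mail or dropped in the shared folder, matching options 2.1 and 2.2.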
I would like opinions on other approaches (hopefully more minimal and easier to scale), the pros and cons of the two approaches I've presented, and how I can best implement them.
Option 1 sounds like Oracle Application Express (Apex). Even if you aren't an experienced developer, in a matter of a few hours you should be able to create a working web application.
What should you do?
talk to your DBA and ask them to install Apex
when they provide login credentials to you (presuming they'll also create a workspace for you), create a new application
you'd mostly use Interactive Reports
if all data you need is in one table, even better
if not, you'll have to write a query which joins several tables, but hey - you already have those queries, don't you?
Interactive Report lets users filter data in various ways
you can download the results in Excel format so they can analyze the data the way they are used to; or, perhaps even better, they can continue using Apex
My Spring OAuth 2.0 authorization microservice is extremely slow. It takes 450+ ms to check a token, and generating a token takes 1.6 s or more. What could be the reason? How can I improve the performance of my microservice?
Details:
Auth and Microservices are running on my laptop
The time I mentioned is for the auth server with requests from only one microservice
Thanks in advance
Download a tool such as VisualVM to perform profiling of your application.
I would also record the elapsed time of individual methods to determine exactly which methods are taking the longest amounts of time.
Once you can verify exactly what code is taking a while, you can attempt JVM optimizations, or review the code (if you're using an external library) and verify the implementation.
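For instance, here is a minimal per-request timing filter, sketched for a javax.servlet-based Spring stack (newer Spring versions use jakarta.servlet instead); it logs how long each endpoint takes, so a slow token check stands out before you reach for a full profiler:

```java
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class RequestTimingFilter extends OncePerRequestFilter {
    private static final Logger log = LoggerFactory.getLogger(RequestTimingFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        long start = System.nanoTime();
        try {
            chain.doFilter(request, response);
        } finally {
            // logs method, path and elapsed milliseconds for every request
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            log.info("{} {} took {} ms", request.getMethod(), request.getRequestURI(), elapsedMs);
        }
    }
}
```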
There might be three reasons:
Your services might be in different regions while the OAuth2 server is a central one in a different region. If that is the case, create instances of the OAuth server in all the regions you use so that latency improves.
Check the encryption techniques you use. SHA-256 hashing is generally preferred; this might not be the complete reason, but in some cases it helps.
Check your OAuth server's capacity, i.e. its RAM, processor, and storage volume. It might also be that multiple services make the same /generatetoken call to the server, and Tomcat handles these as one thread per request; if that is the case, configuring your connection pool will also help (a sketch follows below).
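As a sketch of that last point, the embedded Tomcat thread pool can be enlarged programmatically in Spring Boot 2.x (the value 200 is illustrative, and the equivalent server.tomcat.* property achieves the same):

```java
import org.apache.coyote.AbstractProtocol;
import org.apache.coyote.ProtocolHandler;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatTuning {

    // Raises the request-processing thread cap on the embedded Tomcat,
    // so many concurrent token calls do not queue behind one another.
    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> threadPoolCustomizer() {
        return factory -> factory.addConnectorCustomizers(connector -> {
            ProtocolHandler handler = connector.getProtocolHandler();
            if (handler instanceof AbstractProtocol) {
                ((AbstractProtocol<?>) handler).setMaxThreads(200); // illustrative value
            }
        });
    }
}
```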
I have different applications running with Meteor JS. Each one starts its own server on a different port when running locally. When hosted, they live under subdomains such as messenger.mydomain.com, courses.mydomain.com, and mydomain.com. I'm now considering performance: I could keep running multiple apps like this, or combine them into one app so that all three live under mydomain.com/messenger, mydomain.com/courses, and mydomain.com.
Which of these two scenarios would have the more negative impact on the server?
I too have hosted 4-5 such Meteor apps on a single server using the PM2 process manager. When a Meteor app starts, it initially uses around 80-90 MB of RAM; beyond that, RAM consumption and other parameters depend on how good your server's CPU, cores, etc. are.
Generally it is not an issue to host apps this way, but the most important thing to remember is the number of concurrent users connecting to your different Meteor instances at various points in time. Currently, I have 40 concurrent users hitting each of 4 instances.
You just need to check the server regularly in order to purge process logs, and you might restart the servers as needed during downtime.
My hosting runs on CentOS 7, and everything goes smoothly if you take the above precautions.
I am developing an application and using Spring Boot's health endpoint. My application interacts with several third-party services, which I need to include in my health check. The problem is that when a service does not respond, I keep waiting until the ping times out before I know it is unavailable; this takes a long time, and I want to avoid it.
One solution is to run all the service checks in parallel. This significantly reduces the total time in the timeout case, but I still have a bottleneck of one timeout (a rough sketch of what I mean follows below).
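A rough sketch of that parallel variant; the TCP probe, the port, and the thread-pool size are purely illustrative:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ParallelChecks {
    private static final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Illustrative probe: a TCP connect with a short timeout.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // All probes run concurrently, so the total wait is bounded by the
    // single slowest (or timed-out) check rather than by their sum.
    public static boolean allHealthy(List<String> hosts, int timeoutMs) {
        List<CompletableFuture<Boolean>> checks = hosts.stream()
                .map(h -> CompletableFuture.supplyAsync(() -> reachable(h, 443, timeoutMs), pool))
                .collect(Collectors.toList());
        return checks.stream().allMatch(CompletableFuture::join);
    }
}
```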
Another solution is to keep checking the services periodically in the background (using a scheduler) and cache the last result, so that when our monitor asks for health information, the health endpoint returns the cached result.
Are there any more practical solutions? Or are there any known best practices for such a scenario?
I want to post how I tackled this issue:
Since the application depends on third parties and is not considered functional without its connections to them (it is not a microservice), there was no escaping checking the health of these third parties.
To address the issue I did the following:
1- Identify the critical third parties. The application connects to various third parties, some critical and some not (most probably this involves a business decision), so it is important to identify the critical ones; I excluded the non-critical third parties from the health check.
2- Periodically check the health of the critical third parties and cache the result. Of course this introduces a delay in fetching the health, so the check interval has to be agreed upon; for me a small delay (3 minutes) was acceptable, so that was my interval. A sketch of this cached approach follows below.
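A minimal sketch of that cached approach, assuming Spring Boot Actuator and @EnableScheduling on the application; checkCriticalThirdParty() stands in for the real probe:

```java
import java.util.concurrent.atomic.AtomicReference;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class CachedThirdPartyHealthIndicator implements HealthIndicator {

    private final AtomicReference<Health> cached =
            new AtomicReference<>(Health.unknown().build());

    @Scheduled(fixedDelay = 180_000) // refresh every 3 minutes, as described above
    public void refresh() {
        boolean ok = checkCriticalThirdParty();
        cached.set(ok ? Health.up().build()
                      : Health.down().withDetail("thirdParty", "unreachable").build());
    }

    @Override
    public Health health() {
        return cached.get(); // served from cache; no live call on the request path
    }

    private boolean checkCriticalThirdParty() {
        // placeholder: the real probe would ping the third party with a short timeout
        return true;
    }
}
```

The scheduled refresh runs off the request path, so the health endpoint answers from the cache immediately.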
I'm using Phoenix controllers to receive data via REST calls. An iOS app sends "events" for each user, and based on the event I need to calculate the score/points and send them back to the user. The calculation and the reply to the user can happen asynchronously. I'm using Firebase to communicate back to the user.
What is a good pattern for doing the calculation? It could be a bunch of database queries that determine the score of that event. Where should this calculation happen: background workers, GenEvent, or streams within a user-specific GenServer (I have a supervised GenServer per user)?
I would look at Phoenix channels, tasks and GenServer.
Additionally, if you would like to manage a pool of GenServer workers to do the calculations and perhaps send back the results for you, check out Conqueuer. I wrote this library and it is in use in production systems at my company. It uses poolboy, which is probably the most pervasive pool-management library in Erlang/Elixir.
Admittedly, I do not fully understand the requirements of your system, but it does not seem to me that GenEvent has a place in them. GenEvent is about distributing events to one or more consumers, so unless you have a graph of processes that need to subscribe to events emitted from other parts of your system, I do not see a role for it.