I'm working on integrating Azure DevOps Git into my project. This will involve multiple calls to ADO's Git REST API, specifically to this endpoint.
I can't locate any documentation regarding best practices, so I tried looking for performance metrics instead, but didn't find anything relevant there either.
What is the best way to generate these results on my own? Should I just download a large amount of data and see how long it takes?
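In the absence of published numbers, one rough way to get them is indeed to time a few representative calls yourself, starting with a small payload and scaling up. Below is a minimal sketch (TypeScript on Node 18+, so the global fetch is available); the organization, project, repository and PAT values are placeholders, and the exact query parameters should be checked against the current ADO Git REST API reference.

```typescript
// Hypothetical benchmark: time a few ADO Git REST API calls of increasing size.
// ORG, PROJECT, REPO_ID and the PAT are placeholders -- substitute your own values.
const ORG = "my-org";
const PROJECT = "my-project";
const REPO_ID = "my-repo";
const PAT = process.env.ADO_PAT ?? "";

// Azure DevOps accepts a PAT via Basic auth with an empty user name.
const auth = "Basic " + Buffer.from(":" + PAT).toString("base64");

async function timedGet(label: string, url: string): Promise<void> {
  const start = Date.now();
  const res = await fetch(url, { headers: { Authorization: auth } });
  const body = await res.text(); // force the full download
  const elapsed = Date.now() - start;
  console.log(`${label}: HTTP ${res.status}, ${body.length} bytes, ${elapsed} ms`);
}

async function main() {
  const base = `https://dev.azure.com/${ORG}/${PROJECT}/_apis/git`;
  // Small payload: list repositories.
  await timedGet("repositories", `${base}/repositories?api-version=7.0`);
  // Larger payload: recursive item listing for one repository.
  await timedGet(
    "items (recursive)",
    `${base}/repositories/${REPO_ID}/items?recursionLevel=full&api-version=7.0`
  );
}

main().catch((err) => console.error(err));
```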
For example: you have an IT estate where a mix of batch and real-time data sources exists across multiple systems, e.g. ERP, project management, asset management, website, monitoring, etc.
The aim is to integrate the data sources into a cloud environment (cloud-agnostic).
There is a need for reporting and analytics on combinations of all data sources.
Inevitably, some source systems are not capable of streaming, so batch loading is required.
There are potential use cases for performing functionality/changes/updates based on the ingested data.
Given a steer to create a future-proofed platform, how would you look to design it architecturally?
It's a very open-ended question, but there are some good principles you can adopt to point you in the right direction:
Avoid point-to-point integration and get everything going through a few common points - ideally one. Using an API gateway can be a good place to start; the big players (Azure, AWS, GCP) all have their own options, plus there are plenty of decent independent ones like Tyk or Kong.
Batches and event-streams are totally different, but even then you can still potentially route them all through the gateway so that you get the centralised observability (reporting, analytics, alerting, etc).
Use standards-based API specifications where possible. A good REST-based API, built on a proper resource model, is a non-trivial undertaking, and I'm not sure it fits if you are dealing with lots of disparate legacy integration. If you are going to adopt REST, use OpenAPI to specify the APIs. Using this standard not only makes it easier for consumers, but also gives you better tooling, as many design, build and test tools support OpenAPI. There's also AsyncAPI for event/async APIs.
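To make that concrete, here is a rough, made-up example of what a minimal OpenAPI 3.0 description can look like. It is written as a TypeScript object only to keep the examples in this thread in one language; in practice it would normally be authored as a YAML or JSON file and fed to OpenAPI-aware tooling. The /projects resource is invented for illustration.

```typescript
// A minimal, made-up OpenAPI 3.0 description for a single resource.
// In practice this would normally live in openapi.yaml and drive
// design/build/test tooling that understands the OpenAPI standard.
const openApiDoc = {
  openapi: "3.0.3",
  info: { title: "Estate Integration API", version: "1.0.0" },
  paths: {
    "/projects": {
      get: {
        summary: "List projects ingested from the project-management system",
        responses: {
          "200": {
            description: "A list of projects",
            content: {
              "application/json": {
                schema: {
                  type: "array",
                  items: { $ref: "#/components/schemas/Project" },
                },
              },
            },
          },
        },
      },
    },
  },
  components: {
    schemas: {
      Project: {
        type: "object",
        properties: {
          id: { type: "string" },
          name: { type: "string" },
        },
        required: ["id"],
      },
    },
  },
};

export default openApiDoc;
```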
Do some architecture. Moving sh*t to cloud doesn't remove the sh*t - it just moves it to the cloud. Don't recreate old problems in a new place.
Work out the logical components in your new solution: what does each of them do (what's its reason to exist)? Don't forget ancillary components like API catalogues, etc.
Think about layering the integration (usually depending on how they will be consumed and what role they need to play, e.g. system interface, orchestration, experience APIs, etc).
Want to handle data in a consistent way regardless of source (your 'agnostic' comment)? You'll need to think through how data is ingested and processed. This might lead you into more data / ETL centric considerations rather than integration ones.
Co-design. Is the integration mainly data coming in or going out? Is the integration with 3rd parties or strictly internal?
If you are designing for external / 3rd party consumers then a co-design process is advised, since you're essentially designing the API for them.
If the API's are for internal use, consider designing them for external use so that when/if you decide to do that later it's not so hard.
Take a step back:
Continually ask yourselves "what problem are we trying to solve?". Usually, a technology initiative is successful if there's a well-understood reason for doing it, which has solid buy-in from the business (non-IT).
Who wants the reporting, and why - what problem are they trying to solve?
As you mentioned, it's an IT estate - i.e. an enterprise-level solution with a mix of batch and real-time sources - so first you have to identify the end goal of this migration. You could consider refactoring applications. If you are trying to make it event-driven, then assess the refactoring effort and cost. Separation of responsibility is the key factor for refactoring and migration.
If you are thinking about future-proofing your solution, then consider the cloud for storing and processing your data. It won't necessarily be cheap, but a mix of cloud and on-prem could be a way forward. Cloud providers offer services to move your data at minimal cost, and cloud-native solutions exist for analysing it. A database migration service in AWS or Azure can move the data and then capture ongoing changes, so you can keep using your on-prem DB and apps while performing analysis and reporting in the cloud. That also eases the load on your transactional DB. Most data sync from on-prem to cloud is near real time.
I'm a newbie web developer and I have a basic question regarding my Laravel-based website: where should I put my files? I know there are services like Amazon S3, but firstly I don't know how to work with them, and secondly they are NOT FREE.
There is going to be a fairly large amount of data, including pics and videos (around 10 GB). Where should I store them? And how should I use Laravel to allow users to upload files?
If it will be a bigger project, you should use a cloud service. This is the future of backend development, as it makes your project much easier and faster to maintain and run. If you want to build your own backend, it will take a long time to get it done, since you have to learn a lot of new things and be good at them - there are many key aspects you have to be aware of, like security, scaling, performance and so on. Like you suggested, Amazon AWS, or (imo much better) Google Firebase. I think Google Firebase should be your pick because it is really easy to understand and has great documentation. Next to the storage service (Google Cloud Storage) there are several other services you could use in the future, like analytics, machine learning or NoSQL databases. And the good thing is that you can connect them all together.
With Google Firebase you have a free Spark plan, which is completely free with some limitations. And if you scale to many users, you can upgrade to the other plans, which are not very expensive. Don't forget that your own backend would cost you time as well as money for electricity and hardware.
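As a rough illustration of that suggestion, here is a minimal sketch of uploading a user-selected file to Cloud Storage for Firebase from the browser, using the modular web SDK (firebase v9+). The config values and storage path are placeholders, and you would still need Firebase security rules (or a Laravel backend route) to control who is allowed to upload.

```typescript
// Minimal sketch: upload a user-selected file to Cloud Storage for Firebase
// using the modular web SDK (firebase v9+). Config values are placeholders.
import { initializeApp } from "firebase/app";
import { getStorage, ref, uploadBytes, getDownloadURL } from "firebase/storage";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",
  projectId: "your-project-id",
  storageBucket: "your-project-id.appspot.com",
});
const storage = getStorage(app);

// Called e.g. from an <input type="file"> change handler.
export async function uploadUserFile(file: File): Promise<string> {
  // Store under a per-upload path; adjust the naming/auth scheme to your app.
  const fileRef = ref(storage, `uploads/${Date.now()}-${file.name}`);
  await uploadBytes(fileRef, file);
  return getDownloadURL(fileRef); // URL you can save in your Laravel database
}
```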
If you have more questions, feel free to ask me :)
I work in DevOps for a fairly large company that is in the process of transitioning to microservices. This is a new area for most of the people involved, and some of the governing requests seem like bad practice to me, but I don't have the expertise to argue otherwise.
The request is to generate a report before deploying that would list any new APIs/events (Kafka is our messaging service) in a microservice.
The path that's being recommended is for devs to follow a style guide and then scrape the source code during the CI/CD pipeline to generate a report that can be compared to previous reports to identify any new APIs.
This seems backwards and unsustainable, but I've been unable to find another solution that would satisfy their request. I've recommended deploying to dev first, then using a tracing tool to identify any API changes or event subscriptions, but they insist on having the report before deploying.
I'm hoping for any advice on best practice to accomplish this.
Tracing and detecting version changes is definitely over-engineering. What's simpler, as #zenwraight has mentioned, is to version your APIs. While tracing through services to explore the different versions and schemas could be a potential solution, it requires a lot more investment upfront, and if that's not the bread and butter of the company, I would rather use a vendor product that might support something like this.
If discovery is a mechanism that is needed, I would recommend publishing internal API docs using a tool like Swagger, so that you can search whether there's an API you can consume.
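If each service does publish an OpenAPI/Swagger document, the "new APIs since the last deploy" report can also fall out of a simple diff of those documents rather than of the source code. A rough sketch, assuming each build archives its spec as JSON (the file names are placeholders):

```typescript
// Rough sketch: diff two OpenAPI JSON documents and report any new
// method + path combinations. File names are placeholders.
import { readFileSync } from "fs";

type OpenApiDoc = { paths?: Record<string, Record<string, unknown>> };

const HTTP_METHODS = new Set(["get", "put", "post", "delete", "patch", "head", "options"]);

function listOperations(doc: OpenApiDoc): Set<string> {
  const ops = new Set<string>();
  for (const [path, item] of Object.entries(doc.paths ?? {})) {
    for (const method of Object.keys(item)) {
      if (!HTTP_METHODS.has(method)) continue; // skip parameters, summary, etc.
      ops.add(`${method.toUpperCase()} ${path}`);
    }
  }
  return ops;
}

const previous: OpenApiDoc = JSON.parse(readFileSync("openapi.previous.json", "utf8"));
const current: OpenApiDoc = JSON.parse(readFileSync("openapi.current.json", "utf8"));

const before = listOperations(previous);
const added = [...listOperations(current)].filter((op) => !before.has(op));

console.log(added.length ? `New APIs:\n${added.join("\n")}` : "No new APIs.");
```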
And finally, to support moving between versions, I would recommend having an API onboarding process for the services, so that teams can notify other teams that are using specific versions when their services are coming to the end of their lifecycle and they need to migrate to newer ones.
I have seen the sample projects on your website for Dexie.Syncable, such as sync-server and sync-client, and they all seem to write to a database directly vs interacting with a web API. I am looking for a little help on where to get started beyond the examples on the website. The API I am trying to write a gateway for is DreamFactory.
Also, it looks like the version 2 beta has had many improvements to Dexie.Syncable.
I would recommend building a new server project based on either WebSocketSyncServer.js or the GitHub repo of sync-server. However, I cannot give the details on how to call REST APIs instead of working directly against a database or memory. I would suggest using ES2016 async/await, since your API calls are asynchronous.
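As a rough illustration of the async/await point only (not Dexie.Syncable's actual server hooks, which you would need to lift from WebSocketSyncServer.js or sync-server), the direct database writes could be replaced with an async function that forwards a batch of changes to the REST backend. The endpoint, payload shape and change structure below are assumptions:

```typescript
// Rough sketch only: forwarding a batch of client changes to a REST backend
// with async/await instead of writing to a local database. The endpoint,
// payload shape and auth are made up -- adapt them to the actual DreamFactory
// API and to the hooks in WebSocketSyncServer.js / sync-server.
const API_BASE = "https://example.com/api/v2"; // placeholder

interface Change {
  type: number;                     // create/update/delete marker, as used by the sync protocol
  table: string;
  key: unknown;
  obj?: unknown;                    // full object for creates
  mods?: Record<string, unknown>;   // partial modifications for updates
}

export async function applyClientChanges(changes: Change[]): Promise<void> {
  for (const change of changes) {
    // One call per change keeps the sketch simple; batching would be better.
    const res = await fetch(`${API_BASE}/${change.table}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(change),
    });
    if (!res.ok) {
      throw new Error(`Failed to apply change to ${change.table}: ${res.status}`);
    }
  }
}
```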
Maybe you could try getting more help on https://github.com/nponiros/sync_server by filing an issue there.
My organization is tracking multiple Scrum projects in VersionOne. Each week, we use the Release Forecasting report for each project to create a management dashboard that indicates the health and expected completion date of each project. I would like to automate this. Do any of the VersionOne APIs allow for the execution of this report and retrieving the image that is generated?
There is not an endpoint specific to Release Forecasting, nor is there an endpoint to generate the image. However, you can get to the underlying data via the existing API endpoints. For reporting, I recommend query.v1. The closest example is the query for burndown data. You would need to take Scope as the focus of the query, not Timebox.
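As a rough sketch of what that could look like (the instance URL, token and attribute names are placeholders and should be checked against your instance's metadata and the query.v1 documentation):

```typescript
// Rough sketch: pull the underlying data for a project (Scope) from the
// VersionOne query.v1 endpoint. Instance URL, credentials and attribute
// names are placeholders -- verify them against your instance's metadata.
const V1_URL = "https://www1.v1host.com/MyInstance/query.v1"; // placeholder
const TOKEN = process.env.V1_ACCESS_TOKEN ?? "";

const query = {
  from: "Scope",                              // focus on the project, not the Timebox
  select: ["Name", "BeginDate", "EndDate"],   // illustrative attributes only
  where: { Name: "My Project" },
};

async function run() {
  const res = await fetch(V1_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${TOKEN}`,
    },
    body: JSON.stringify(query),
  });
  console.log(JSON.stringify(await res.json(), null, 2));
}

run().catch((err) => console.error(err));
```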
You might also take a look at VersionOne's Reporting and Analytics offering. While that is not a coding or API-based way to get the reports, it might still automate what you need.
I was able to automate the retrieval of this report, but not through the V1 API. Through careful use of Fiddler and a C# script using WebClient to execute POST requests, it was possible. The resulting code is pretty fragile, though, since it isn't using the API.