Google Civic API - electionQuery (google-civic-information-api)

Hello Support Engineer Team,
I am using the GET https://www.googleapis.com/civicinfo/v2/elections
route to query for available elections, and then, with the returned electionId, the Elections: voterInfoQuery route to gather contest and candidate information (a sketch of the two calls is below).
Two questions:
1. If I run the election query now, as of 11/27/2020, it only shows me two elections, and it seems to be missing some information, such as the Georgia runoff election.
2. How can I query the elections route to gather information for past elections, such as the one that just finished (Nov 3, 2020)?
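Here is roughly what I'm calling, as a sketch (the API key and address are placeholders):

const key = 'YOUR_API_KEY';

(async () => {
  // 1) List available elections
  const electionsRes = await fetch(
    `https://www.googleapis.com/civicinfo/v2/elections?key=${key}`
  );
  const elections = await electionsRes.json();

  // 2) Use an electionId with voterInfoQuery to get contests/candidates
  const electionId = elections.elections[0].id;
  const voterInfoRes = await fetch(
    `https://www.googleapis.com/civicinfo/v2/voterinfo?key=${key}` +
    `&address=${encodeURIComponent('340 Main St, Venice, CA')}` +
    `&electionId=${electionId}`
  );
  const voterInfo = await voterInfoRes.json();
  console.log(voterInfo.contests);
})();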


How to decompose a monolith into microservices by business capability? [closed]

I have a monolith application that uses one database, and in my company we decided to rewrite the application and use microservices in the backend.
At this time, we decided NOT to split the database, because other applications and processes are using it, and it would take two years to change.
The difficult part of the process is to decompose the system and identify the right microservices.
I'll try to explain our system by starting with the UI. Please read carefully, because I am trying to explain it in detail.
The system displays stock market data. The companies, funds, and fund managers in the market post daily reports about their activities: status updates, information for investors, and more.
The "breaking announcement" page
displays a list of today's priority reports. Each row contains the subject of the PDF document (the report) that the company published, and the company the report belongs to.
When the user clicks on a row, we redirect to the "report page", which contains the report details.
In the database, we have entities such as report, company, company_report, event, public_offers, upcoming_offering, and more.
So to get the list, we run an inner join query like this:
SELECT ...
FROM report r
INNER JOIN company_report cr ON r.reportid = cr.reportid
INNER JOIN company c ON cr.company_cd = c.company_cd
WHERE ...
Most of our server endpoints don't change anything; they are only used to retrieve data.
So I'll create the endpoint /reports/breaking-announcement to get the list, and it returns objects like this:
[{ reportId, subject, createAt, updateAt, pdfUrl, company: { id, name } }]
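Written out as a type, a row looks like this (field names from above; the concrete types are my assumption):

interface BreakingAnnouncement {
  reportId: string;
  subject: string;
  createAt: string; // ISO timestamp (assumed)
  updateAt: string; // ISO timestamp (assumed)
  pdfUrl: string;
  company: { id: string; name: string };
}
type BreakingAnnouncementResponse = BreakingAnnouncement[];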
The "today's company reports" page acts like the "breaking announcement" page, but it displays all of today's reports (not only the priority ones).
Disclosures are reports.
On this page we also have a search to get all reports by criteria, for example reports by company name. To do that we have an autocomplete, so the user types the company name or id.
In order to do that, we think there should be an API endpoint /companies/by-autocomplete whose response will be [{ companyId, companyName, isCompany }].
The "ETF" page is the same as before, but this time we display the funds' reports (not the companies' reports).
The list contains the fund name and the subject of the report. Each click on a row leads to the report details page (the same page).
On this page we have a search by criteria such as date-from/date-to, and fund name or id via autocomplete (the endpoint /funds/by-autocomplete returns [{ fundId, fundName, ... }]).
The "foreign ETF" page is the same as before: a list of items, where each item looks like before:
<fund name>
<subject of the report>
Only the query is different.
Okay, this was a very long description. Thank you for reading.
Now I want to identify the microservices for this application.
I ended up with:
Report microservice - responsible for getting and handling all the reports in the system.
It has endpoints like getAll and getById, plus list endpoints like getBreakingAnnouncement, getCompanyTodayReports, getFunds, and getForeignFunds. The report microservice will make a request to the company or funds microservice to join in the company data and build the response (see the sketch after this list).
Company microservice:
handles all company data, with endpoints such as getAll, getByIds (for the report service), and getByAutocomplete.
Funds microservice:
handles all funds data, with endpoints such as getAll, getByIds (for the report service), and getByAutocomplete.
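The join step I have in mind for the report microservice would look roughly like this sketch (service URLs and field names are made up for illustration):

async function fetchJson(url: string): Promise<any> {
  const res = await fetch(url);
  return res.json();
}

async function getBreakingAnnouncements() {
  // 1) Get today's priority reports from the report store
  const reports = await fetchJson('http://report-service/reports?priority=today');
  // 2) Batch-load the owning companies from the company microservice
  const ids = [...new Set(reports.map((r: any) => r.companyId))];
  const companies = await fetchJson(`http://company-service/companies?ids=${ids.join(',')}`);
  const byId = new Map(companies.map((c: any) => [c.id, c]));
  // 3) Join in memory and build the response shape shown earlier
  return reports.map((r: any) => ({ ...r, company: byId.get(r.companyId) }));
}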
There are other services, such as a notification service or email service, but those are not business services. I want to split up my business logic into microservices in order to deploy and maintain them easily.
I'm not sure I'm decomposing right. Maybe I am. But does it fit the microservice ideas? Does it fit the pattern Decompose by business capability? If not, what are the business capabilities in my system?
I don't think a query-oriented decomposition of your current application monolith will lead to a good microservice (MS) design. Two of your proposed microservices have the same endpoint query API, which suggests to me that you are viewing your first-generation microservices as just entity servers.
Your idea to perform joins on cross-MS query operations indicates these first-gen "microservices" are closely coupled and hence fall short of a genuine MS architecture.
One technique to verify an MS design is to ask yourself, "how would the whole system cope if one MS were unavailable for 3 minutes?". Solving that design challenge leads down a path towards decoupled, message-based interactions between the microservices. And this in turn leads to interactions between microservices being expressed as business operations, where one MS raises messages that trigger a mutation in the state of another MS.
Maybe you should reduce the scope of your MS ambitions and instead look at Schema Stitching in GraphQL. Reading between the lines of your question I think a more realistic first step towards a distributed system would be to create specialised query services with a GraphQL endpoint.
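A minimal stitching-gateway sketch, assuming two GraphQL endpoints (the service URLs are hypothetical; this follows the @graphql-tools remote-schema pattern):

import { stitchSchemas } from '@graphql-tools/stitch';
import { introspectSchema } from '@graphql-tools/wrap';
import { print } from 'graphql';
import type { DocumentNode } from 'graphql';

function makeExecutor(url: string) {
  return async ({ document, variables }: { document: DocumentNode; variables?: Record<string, unknown> }) => {
    const response = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: print(document), variables }),
    });
    return response.json();
  };
}

async function buildGatewaySchema() {
  const reportsExec = makeExecutor('http://reports-service/graphql');
  const companiesExec = makeExecutor('http://companies-service/graphql');
  // Each subschema is introspected from its service and queried through it
  return stitchSchemas({
    subschemas: [
      { schema: await introspectSchema(reportsExec), executor: reportsExec },
      { schema: await introspectSchema(companiesExec), executor: companiesExec },
    ],
  });
}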
At this time, we decided NOT to split the database, because other applications and processes are using it, and it would take two years to change.
I'll try to stop you right here. In the general case, a shared database is a huge antipattern in a microservices architecture and should be avoided as much as possible. There are multiple problems here: less transparent dependencies between services, which can cause high coupling with all its consequences for development and deployment; an increasing chance of eventually ending up with a distributed monolith instead of microservices; etc.
Other applications and processes using the database should not stop you from moving away from it; there are things that mitigate that. You just sync data between the services and the "legacy" database, asynchronously, using basically the same approaches you will use between your microservices, for example transaction log tailing with something like Debezium. This has its own costs, but I would argue it is usually better to pay them upfront than to keep paying bigger percentages on the tech debt.
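As a sketch of the consuming side, assuming Debezium streams changes from the legacy database into Kafka (the topic name and payload layout follow Debezium defaults, but treat them as assumptions):

import { Kafka } from 'kafkajs';

const kafka = new Kafka({ clientId: 'report-service', brokers: ['kafka:9092'] });
const consumer = kafka.consumer({ groupId: 'report-service-cdc' });

async function run() {
  await consumer.connect();
  // Debezium names topics <server>.<schema>.<table> by default
  await consumer.subscribe({ topic: 'legacydb.public.report', fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value!.toString());
      const row = event.payload?.after; // new row state; null on deletes
      if (row) {
        // upsert the row into this service's own store (not shown)
      }
    },
  });
}

run();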
I ended up with: ....
I would argue that this split looks more like decomposition by subdomain than by business capability. That can actually be quite fine, and it suits a microservices architecture as well.
Based on your description I see at least the following business capabilities in your system that can be defined:
View (manage?) breaking announcements
View (manage?) reports
Search (reports?)
Potentially "today's reports" and "Funds reports" can be considered as separate business capabilities.
I want to split up my business logic into microservices in order to deploy and maintain them easily.
Then again - I highly recommend reconsidering the decision not to move away from the shared database.
I'm not sure I'm decomposing right
Without a whole overview of the system, including the amount of data, the data flows, the resources and competences available in the development teams, the amount of incoming new business requirements, potential vectors of change, etc., it is hard to actually tell.
P.S.
Note that despite the popularity of the microservices architecture, going full-blown microservices is not always the right solution for a concrete project. If you have quite a small team and/or do not handle high loads or large amounts of data with various access patterns, then you potentially do not need microservices. You can still leverage a lot of the approaches used in the microservices architecture, though.

Indexing staking/rewards events for NEAR blockchain

I want to create an app with detailed info about historical stake and reward changes for each block. Can I track every delegation event that contains any stake balance change of a delegator/validator, including information like:
delegator address
validator address
amount of tokens that got delegated, undelegated or receive rewards
I found this contract. I then tried to decode the transaction's actions and receipts, but I still cannot find info about the amount.
For example, this transaction contains an unstake_all method call. I tried using the NEAR REST API and the Postgres DB, like
postgres://public_readonly:nearprotocol@mainnet.db.explorer.indexer.near.dev/mainnet_explorer
but it does not include info about the amount, while the explorer does:
@ojosdepez.near unstaking 211362599667478202066742666. Spent 186315320307823908119982990 staking shares. Total 211362599667478202066742667 unstaked balance and 0 staking shares
Contract total staked balance is 18374491513732210121091349309226. Total number of shares 16197043740284605773282183202762
So can I somehow get these logs using the REST API or Postgres, and are these logs a reliable source? Or is there any other method to find staking/reward amount info?
First of all
But it does not include info about the amount, while the explorer does:
@ojosdepez.near unstaking 211362599667478202066742666. Spent 186315320307823908119982990 staking shares. Total 211362599667478202066742667 unstaked balance and 0 staking shares
Contract total staked balance is 18374491513732210121091349309226. Total number of shares 16197043740284605773282183202762
Explorer queries the RPC and shows you the logs from the ExecutionOutcome.
In the PostgreSQL database for Indexer for Explorer, we don't store logs, so you can't find them there.
To have detailed info about historical stake and reward changes for each block, I think you should index the blockchain yourself, to be sure everything is calculated the way you expect.
In order to do this, you'd need to build an indexer. Happily, we're releasing an MVP (yet a completely working solution) of NEAR Lake Framework, which is a micro-framework that makes building indexers even easier than it was before.
Please have a look at the example project https://github.com/near/near-lake-raw-printer, which basically prints the data from each block. Refer to this comment for an example of the structure you can receive for each block (StreamerMessage): https://github.com/near/near-lake/issues/1#issuecomment-1035285658
So the main idea is to start indexing from the block where rewards became available (Phase 2) and analyze each block, transaction, and receipt related to staking/unstaking, so you can perform your calculations and record the info about historical stake and reward changes.
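A minimal sketch with the JS flavor of the framework (the bucket/region/start values follow the README; the log filtering is my illustration, not an official API):

import { startStream, types } from 'near-lake-framework';

const lakeConfig: types.LakeConfig = {
  s3BucketName: 'near-lake-data-mainnet',
  s3RegionName: 'eu-central-1',
  startBlockHeight: 63804051, // pick the height where rewards became available
};

async function handleStreamerMessage(msg: types.StreamerMessage): Promise<void> {
  for (const shard of msg.shards) {
    for (const outcome of shard.receiptExecutionOutcomes) {
      for (const log of outcome.executionOutcome.outcome.logs) {
        // Staking-pool contracts emit lines like
        // "<account> unstaking <amount>. Spent <shares> staking shares. ..."
        if (log.includes('staking')) {
          console.log(msg.block.header.height, outcome.receipt.receiverId, log);
        }
      }
    }
  }
}

(async () => {
  await startStream(lakeConfig, handleStreamerMessage);
})();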

Data Dependency Among Microservices

In my microservices architecture, I have a bunch of services: A, B, C, D, etc.
For example: Service A is responsible for managing students. Service B is responsible for managing the assessments that students take.
Service B stores the studentId in its tables for reference. However, when I have to query for all the assessments taken in a given time period, Service B has to call Service A to get the student names, because the client app wants the name, not the id.
I see a lot of network calls among services because of this. So I was thinking Service A could raise an event whenever a new student registers. Service B would consume the event and store the student info in its own DB (and do the same for student name updates). A sketch of what I mean is below.
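Something like this, with an in-memory bus standing in for a real broker (all names are illustrative):

type StudentEvent =
  | { type: 'StudentRegistered'; studentId: string; name: string }
  | { type: 'StudentRenamed'; studentId: string; name: string };

const subscribers: Array<(e: StudentEvent) => void> = [];
const bus = {
  publish: (e: StudentEvent) => subscribers.forEach((fn) => fn(e)),
  subscribe: (fn: (e: StudentEvent) => void) => subscribers.push(fn),
};

// Service B keeps its own copy of student names for read queries
const studentNames = new Map<string, string>();
bus.subscribe((e) => studentNames.set(e.studentId, e.name));

// Service A publishes on registration and rename
bus.publish({ type: 'StudentRegistered', studentId: 's1', name: 'Ada Lovelace' });
bus.publish({ type: 'StudentRenamed', studentId: 's1', name: 'Ada King' });
console.log(studentNames.get('s1')); // "Ada King"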
Questions:
Is this a bad practice? What are the pros and cons of this approach?
Feel free to suggest any alternatives.
It is good to allow some data duplication across the services, and you can do it in many different ways.
One option is to have Service A publish an event when a new student is registered.
An alternative (that might be simpler) is that when you create a new assessment against Service B, you provide the student's name as part of the CreateAssessment command. This way you don't need to publish any events between the two services when a new user is created; a command shape like the sketch below would do.
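interface CreateAssessmentCommand {
  studentId: string;
  studentName: string; // duplicated from Service A at command time (field names assumed)
  takenAt: string;     // ISO date
  score: number;
}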
Publishing events and replicating data into each service's database is a totally reasonable approach to minimizing network calls. I think you might find my answer to a similar question helpful as well (option 1 is the same as what you described):
https://stackoverflow.com/a/57791951/1563240

Is there a way to get train lines that connect to a given Place of type 'train_station' [duplicate]

I've seen that you can retrieve nearby subway stations for a location using the Google Maps Places API, as explained here:
Google Maps: Retrieve nearby subway station's latitude and longtitude?
But in addition to that data, I would also like to retrieve the subway lines available at that station. Is this possible?
Currently the Places API doesn't expose this information for transit stations. There is a feature request in the Google issue tracker to make it possible to retrieve line numbers for each stop. You can find this feature request at
https://issuetracker.google.com/issues/35827961
Please feel free to star this feature request to express your interest and subscribe to further notifications from Google.

Access and scheduling of FHIR Questionnaire resource

I am trying to understand how to use the FHIR Questionnaire resource, and have a specific question regarding this.
My project is specifically about how a citizen in our country could respond to Questionnaires via a web app, with the answers then submitted to the FHIR server as QuestionnaireAnswers, to be read/analyzed by a health professional.
A FHIR-based system could have lots of Questionnaires (Qs); groups of Qs, or even specific Qs, could be targeted at certain users or groups of users. The display of a questionnaire to the citizen could also be based on a care plan of sorts, for example certain Questionnaires needing to be filled in during the weeks after surgery. The Questionnaires could also be recurring ones that need to be filled in every day or week indefinitely, to support data collection on the state of a chronic disease.
What I'm wondering is whether FHIR has a resource that fits the 'logistics' of displaying the right form to the right person. I can see CarePlan, which seems to partly fit. Or is this something that would typically be handled outside of FHIR's scope, by specific server implementations?
So, to summarize:
Which resource or mechanism would a health professional use to set up that a patient should answer certain Questionnaires, either regularly or as part of, for example, a follow-up after surgery? This would include setting up the schedule for the form(s) to be filled in, and possibly configuring what should happen if a form isn't filled in as required.
Which resource (possibly the same) or mechanism would the patient's web app use to retrieve the relevant Questionnaire(s) at a given point in time?
At the moment, the best resource for saying "please capture data of type X on schedule Y" would be DiagnosticOrder, though the description probably doesn't make that clear. (If you'd be willing to click the "Propose a change" link and submit a change request for us to clarify, that'd be great.) If you wanted to order multiple questionnaires, then CarePlan would be a way to group that.
The process of taking a complex schedule (or set of schedules) and turning that into a simple list of "do this now" requests that might be more suitable for a mobile application to deal with is scheduled for DSTU 2.1. Until then, you have a few options for the mobile app:
- have it look at the CarePlan and complex DiagnosticOrder schedule and figure things out itself (see the sketch after this list)
- have a server generate a List of mini 1-time DiagnosticOrders and/or Orders identifying the specific "answer" times
- roll your own mechanism using the Other/Basic resource
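For the first option, the app's poll could be a plain FHIR search, something like this sketch (the base URL, patient id, and status value are my assumptions):

async function fetchDueOrders(patientId: string) {
  const base = 'https://fhir.example.org'; // hypothetical server
  const res = await fetch(
    `${base}/DiagnosticOrder?subject=Patient/${patientId}&status=requested&_format=json`
  );
  return res.json(); // a Bundle whose entries point at the Questionnaires to show
}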
Depending on your timelines, you might want to stay tuned to discussions by the Patient Care and Orders and Observations work groups as they start dealing with the issues around workflow management starting next month in Atlanta.
