Sample repro here: SubscriptionsFetcherTest
I am trying to test my integration with the YouTube API.
For that I'm creating a MockHttpTransport with a mock response (or a mock request; the same issue described below arises either way):
val mockHttpTransport = MockHttpTransport.Builder()
    // .setLowLevelHttpRequest(mockRequest)
    .setLowLevelHttpResponse(mockResponse)
    .build()

val youtube = YouTube.Builder(
    mockHttpTransport,
    mockGoogleCredential.jsonFactory,
    mockGoogleCredential
).build()
I want to get all my subscriptions, but there are more than 50, so I get a paged response ("nextPageToken": "CAUQAA") and need to execute a second request:
val request = youtube
    .subscriptions()
    .list(listOf("snippet", "contentDetails"))
    .setMaxResults(5000)
    .setMine(true)
    .setFields("*")
    .setPrettyPrint(true)

val responseWhichReturnsTheCorrectSubscriptionListPage: SubscriptionListResponse = request.execute()
println(responseWhichReturnsTheCorrectSubscriptionListPage)

val responseWhichThrowsIAE: SubscriptionListResponse = request.setPageToken("CAUQAA").execute()
println(responseWhichThrowsIAE)
Q1: Why can't I return the same mock response multiple times? The second execute() throws java.lang.IllegalArgumentException: no JSON input found.
Q2: How can I pass multiple different mock responses to the transport, to simulate having to fetch multiple pages from YouTube?
I have already tried searching on Stack Overflow (google-cloud-platform mock, and many other searches), but with little success.
I have also asked in Google's GitHub issues, but got no response there either: https://github.com/googleapis/google-http-java-client/issues/1760
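For reference, here is one way Q2 might be handled (a minimal, untested sketch using the google-http-java-client testing classes): rather than a single mockResponse, whose content stream is presumably consumed by the first execute(), override MockHttpTransport.buildRequest so that every request hands out a fresh MockLowLevelHttpResponse. The JSON strings here are placeholders for real page payloads:

import com.google.api.client.http.LowLevelHttpRequest
import com.google.api.client.json.Json
import com.google.api.client.testing.http.MockHttpTransport
import com.google.api.client.testing.http.MockLowLevelHttpRequest
import com.google.api.client.testing.http.MockLowLevelHttpResponse

// Placeholder payloads: the first page carries a nextPageToken, the second does not.
val firstPageJson = """{"items": [], "nextPageToken": "CAUQAA"}"""
val secondPageJson = """{"items": []}"""

// Hand out one queued payload per request, so each execute() sees an unconsumed response.
val queuedPages = ArrayDeque(listOf(firstPageJson, secondPageJson))

val pagingTransport = object : MockHttpTransport() {
    override fun buildRequest(method: String, url: String): LowLevelHttpRequest =
        MockLowLevelHttpRequest().setResponse(
            MockLowLevelHttpResponse()
                .setStatusCode(200)
                .setContentType(Json.MEDIA_TYPE)
                .setContent(queuedPages.removeFirst())
        )
}

Passing pagingTransport to YouTube.Builder in place of mockHttpTransport should let the second execute() see the second payload instead of an exhausted stream.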
Related
I'm trying to use the GraphQL API for Jira XRay Cloud (documentation here) with the objective of working around the limit of a maximum of 100 results per request.
More specifically, I would like to be able to retrieve all tests contained in a Test Plan, for which I'm using:
{
  getTestPlan(issueId: "${test_plan.id}") {
    issueId
    tests(limit: 100) {
      results {
        issueId
        jira(fields: ["key"])
      }
    }
  }
}
However, if the Test Plan contains 130 tests, I am unable to get the remaining 30.
How would I ask GraphQL to provide me with the next 100 results?
I've tried setting the request as tests(limit: 100, after: 100), as well as including a pageInfo with hasNextPage, but to no avail - I guess it depends on how this particular GraphQL endpoint is defined, but I'm an absolute rookie at GraphQL so I can't really tell.
Thanks for the help!
On the response of the first request you will have the following information available:
total, with the total number of entries available
start, the start of the next page of results
limit, the limit used in the request
So based on that information you should do the first request exactly as you have it, and the next one adding "start" to it, like below:
{
  getTestPlan(issueId: "${test_plan.id}") {
    issueId
    tests(limit: 100, start: 100) {
      results {
        issueId
        jira(fields: ["key"])
      }
    }
  }
}
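If it helps, the paging itself is just a loop that bumps start by the page size until total is reached. Here is a rough Kotlin sketch of that loop; runQuery and TestsPage are placeholders for however you already send the query and parse the response, not part of the XRay API:

// Placeholder shape for whatever you parse out of each GraphQL response.
data class TestsPage(val testKeys: List<String>, val total: Int)

// runQuery stands in for your existing HTTP call to the XRay GraphQL endpoint.
fun fetchAllTests(testPlanId: String, runQuery: (String) -> TestsPage): List<String> {
    val pageSize = 100
    val allKeys = mutableListOf<String>()
    var start = 0
    do {
        val query = """
            { getTestPlan(issueId: "$testPlanId") {
                tests(limit: $pageSize, start: $start) {
                  total
                  results { issueId jira(fields: ["key"]) }
                } } }
        """
        val page = runQuery(query)
        allKeys += page.testKeys
        start += pageSize
    } while (start < page.total)
    return allKeys
}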
I'm testing a Corda 4 CorDapp and set up a Spring web server to make API calls to my CorDapps. I have one API endpoint named get-all-contract1-states, which does exactly what it says: it gets all of my contract1 states from the vault.
When I call this function, it does return the states, but it also returns an excessive amount of repetitive metadata, making the output for a single state more than 600k lines long.
@GetMapping(value = "/get-contract1-states", produces = arrayOf(MediaType.APPLICATION_JSON_VALUE))
fun getContract1s() = rpcOps.vaultQueryBy(
    criteria = VaultQueryCriteria(status = Vault.StateStatus.ALL),
    paging = PageSpecification(DEFAULT_PAGE_NUM, 200),
    sorting = Sort(emptySet()),
    contractStateType = contract1State::class.java
).states
Most of the repetitive metadata (which makes up about 85% of the 600k lines) is at the end of the JSON, regarding "zero":false,"one":false,"fieldSize":256,"fieldName":"SecP256R1Field". Are there any flags, options, or simply any way to get back a clean version of the contract state without so much excess data? I only care about the variables from the contract, nothing more.
What you currently have will return you a collection of:
data class Page<out T : ContractState>(val states: List<StateAndRef<T>>,
                                       val statesMetadata: List<StateMetadata>,
                                       val totalStatesAvailable: Long,
                                       val stateTypes: StateStatus,
                                       val otherResults: List<Any>)
Hence why you're getting all the metadata. What you're after in this data object is states (which is actually a list of StateAndRef) and then just the state within each.
The following code should get you what you're after:
@GetMapping(value = "/get-contract1-states", produces = arrayOf(MediaType.APPLICATION_JSON_VALUE))
fun getContract1s() = rpcOps.vaultQueryBy(
    criteria = QueryCriteria.VaultQueryCriteria(status = Vault.StateStatus.ALL),
    paging = PageSpecification(DEFAULT_PAGE_NUM, 200),
    sorting = Sort(emptySet()),
    contractStateType = contract1State::class.java
).states.map { it.state.data }
Note: the key bit here is the mapping to state.data
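If mapping to state.data still pulls more into the JSON than you want (for example key material serialized from Party fields), a further option along the same lines is to project each state into a small DTO that carries only the fields you care about. The property names below are placeholders for whatever contract1State actually contains:

// Placeholder DTO: swap these properties for the real fields of contract1State.
data class Contract1Summary(val owner: String, val value: Int)

@GetMapping(value = "/get-contract1-summaries", produces = arrayOf(MediaType.APPLICATION_JSON_VALUE))
fun getContract1Summaries() = rpcOps.vaultQueryBy(
    criteria = QueryCriteria.VaultQueryCriteria(status = Vault.StateStatus.ALL),
    paging = PageSpecification(DEFAULT_PAGE_NUM, 200),
    sorting = Sort(emptySet()),
    contractStateType = contract1State::class.java
).states.map { it.state.data }
    .map { Contract1Summary(owner = it.owner.toString(), value = it.value) } // placeholder field access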
I'm using JMeter 3.2 to write some tests. I have a CSV file with test account info. Each row contains login info for a user. Each user needs to request a token that is used in later requests.
My test plan:
The get token request retrieves a token. The login request logs in the user and returns another token. Select customer card selects a customer and returns the final token. The code for the PostProcessor is (I'm not experienced in this, so any advice is appreciated):
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;

// Check if our map already exists
if (props.get("map") == null) {
    JSONObject obj = new JSONObject();
    obj.put("${department}", new String(data));
    log.info("Adding department to map. Department: ${department}. Token: " + new String(data));
    props.put("map", obj.toJSONString());
} else {
    // Retrieve the current map
    String map = (String) props.get("map");
    JSONParser parser = new JSONParser();
    JSONObject jobj = (JSONObject) parser.parse(map);
    // Add the new department (with its token) to the map
    jobj.put("${department}", new String(data));
    log.info("Updating map for department. Department: ${department}. Token: " + new String(data));
    props.put("map", jobj.toJSONString());
}
Attempt 1:
I set up a Once Only Controller to log in a user and retrieve the token.
Now let's say I have 10 lines in my CSV file, but in my test I only want to use 3 users and loop 10 times. What happens is that 3 login requests are sent (one for each user). This works fine for the first iteration. At the second iteration the 3 threads will use rows 4-6, which don't have a token, and therefore fail.
Attempt 2:
I'm using an If Controller to check whether the token has been set or not. I haven't got this working at all. I added a Beanshell PreProcessor to the controller where I attempt to retrieve the token; if it's null or empty, I set the token variable to "". In the If Controller I check for this value, but no luck yet.
Attempt 3:
In Beanshell, check whether the token has already been created and, if not, call the test fragment that retrieves it. Unfortunately this does not seem to be possible.
It might be worth noting that I store my tokens in a property, so that all threads can access them.
Please let me know if you need more information.
I figured out a solution. In essence, what I wanted to do was store a token for each row in the data file.
I did this by creating a setUp Thread Group, which is executed before the other thread groups. In it I loop through the data and store a token for each row. All other thread groups can then access these tokens as they run.
I couldn't find a way to change the name of a column I just created, either in the browser interface or via an API call. It looks like all object-related API calls manipulate instances, not the class definition itself?
Anyone know if this is possible, without having to delete and re-create the column?
This is how I did it in Python:
import json, httplib, urllib

connection = httplib.HTTPSConnection('api.parse.com', 443)
params = urllib.urlencode({"limit": 1000})
connection.connect()
connection.request('GET', '/1/classes/Object?%s' % params, '', {
    "X-Parse-Application-Id": "yourID",
    "X-Parse-REST-API-Key": "yourKey"
})
result = json.loads(connection.getresponse().read())
objects = result['results']

for object in objects:
    connection = httplib.HTTPSConnection('api.parse.com', 443)
    connection.connect()
    objectId = object['objectId']
    objectData = object['data']
    connection.request('PUT', ('/1/classes/Object/%s' % objectId), json.dumps({
        "clonedData": objectData
    }), {
        "X-Parse-Application-Id": "yourID",
        "X-Parse-REST-API-Key": "yourKEY",
        "Content-Type": "application/json"
    })
This is not optimized - you can batch 50 of these requests together at once, but since I'm just running it once I didn't do that. Also, since there is a 1000-result query limit in Parse, you will need to run the load multiple times with a skip parameter, like:
params = urllib.urlencode({"limit":1000, "skip":1000})
From this Parse forum answer: https://www.parse.com/questions/how-can-i-rename-a-column
Columns cannot be renamed. This is to avoid breaking an existing app. If your app is still under development, you can just query for all the objects in your class and copy the value of the old column to the new column. The REST API is very useful for this. You may then drop the old column in the Data Browser.
Hope it helps
Yes, it's not a feature provided by Parse (yet), but there are some third-party API management tools that you can use to rename the fields in the response. One free tool is called apibond.com.
It's a workaround, but I hope it helps.
Now I know I can only download a string asynchronously in Windows Phone 7, but in my app I want to know which request has completed.
Here is the scenario:
I make a certain download request using WebClient()
I use the following code for the download completed event:
WebClient stringGrab = new WebClient();
stringGrab.DownloadStringCompleted += ClientDownloadStringCompleted;
stringGrab.DownloadStringAsync(new Uri(<some http string>, UriKind.Absolute));
I give the user the option of issuing another download request if this one takes too long for their liking.
My problem is that when/if the two requests return, I have no way of knowing which is which, i.e. which was the first request and which was the second.
Is there a method of identifying/synchronizing the requests?
I can't change the requests to return to different DownloadStringCompleted methods!
Thanks in Advance!
Why not do something like this:
void DownloadAsync(string url, int sequence)
{
    var stringGrab = new WebClient();
    stringGrab.DownloadStringCompleted += (s, e) => HandleDownloadCompleted(e, sequence);
    stringGrab.DownloadStringAsync(new Uri(url, UriKind.Absolute));
}

void HandleDownloadCompleted(DownloadStringCompletedEventArgs e, int sequence)
{
    // The sequence param tells you which request was completed
}
It is an interesting question, because by default WebClient doesn't carry any unique identifiers. However, you can get the hash code, which will be unique for each given instance.
So, for example:
WebClient client = new WebClient();
client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
client.DownloadStringAsync(new Uri("http://www.microsoft.com", UriKind.Absolute));
WebClient client2 = new WebClient();
client2.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
client2.DownloadStringAsync(new Uri("http://www.microsoft.com", UriKind.Absolute));
Each instance will have its own hash code - you can store it before actually invoking the DownloadStringAsync method. Then you will add this:
int FirstHash = client.GetHashCode();
int SecondHash = client2.GetHashCode();
Inside the completion event handler you can have this:
if (sender.GetHashCode() == FirstHash)
{
    // First completed
}
else
{
    // Second completed
}
REMEMBER: A new hash code is given for every re-instantiation.
If the requests are essentially the same, rather than keeping track of which request is being returned, why not just keep track of whether one has previously been returned, or how long it has been since the last one returned?
If you're only interested in getting this data once, but are trying to allow the user to reissue the request if it takes a long time, you can just ignore all but the first successfully returned result. This way it doesn't matter how many times the user makes additional requests and you don't need to track anything unique to each request.
Similarly, if the user can request/update data from the remote service at any point, you could keep track of how long it has been since you last got successful data back and not bother updating the model/UI if you get another response shortly after that. It'd be preferable to not make requests in this scenario, but if you've got to deal with long delays and race conditions in responses, you could use this technique and still keep the UI/data up to date within a threshold of a few minutes (or however long you specify).