I'm trying to change the PURGE response headers in Varnish 4:
HTTP/1.1 200 Purged
Content-Type: text/html; charset=utf-8
Date: Fri, 02 Sep 2016 19:57:56 GMT
Retry-After: 5
Server: Varnish
X-Varnish: 163921
Content-Length: 241
Connection: keep-alive
I have modified "Server: Varnish" in vcl_recv and vcl_deliver, which works for every request except PURGE.
I need to change the Server header, or at least add a custom response header.
I can't find any documentation about it, so I was wondering whether anyone has done this before or whether it is hardcoded.
You need to override the built-in synthetic response generated by Varnish when purging objects. This can be trivially implemented using some extra VCL:
...
sub vcl_purge {
    return (synth(700, "Purged"));
}

sub vcl_synth {
    if (resp.status == 700) {
        set resp.status = 200;
        set resp.http.Server = "ACME";
    }
}
I tried to download zip files from a remote service, but every time the program finishes running, all I have is a broken 512 KB zip file. I have no idea what happened; can anyone help?
My code is below:
Flux<DataBuffer> dataBufferFlux = WebClient.builder()
        .build()
        .post()
        .uri(url)
        .accept(MediaType.APPLICATION_OCTET_STREAM)
        .bodyValue(jsonPayload)
        .retrieve()
        .bodyToFlux(DataBuffer.class);

DataBufferUtils.write(dataBufferFlux, zipFilePath, StandardOpenOption.CREATE, StandardOpenOption.WRITE)
        .doOnError(t -> logger.error("download failed", t))
        .then(Mono.just(zipFilePath.toAbsolutePath().toString()));
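One possible cause, assuming the snippet above is everything that touches the download: the Mono produced by DataBufferUtils.write(...).then(...) is never subscribed to or blocked on, so the pipeline is assembled but abandoned part-way through the transfer. Reactor pipelines are cold; nothing runs until a subscriber arrives. A stdlib-only sketch of that principle, using a Java Stream (which is lazy in the same way) in place of a Flux:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.stream.Stream;

public class LazyPipelineDemo {
    public static void main(String[] args) {
        AtomicBoolean sideEffectRan = new AtomicBoolean(false);

        // Assembling the pipeline runs nothing, just like building a Flux/Mono.
        Stream<String> pipeline = Stream.of("chunk").peek(c -> sideEffectRan.set(true));
        System.out.println(sideEffectRan.get()); // false: not executed yet

        // Only a terminal operation (the analogue of subscribe()/block()) runs it.
        pipeline.forEach(c -> {});
        System.out.println(sideEffectRan.get()); // true
    }
}
```

If that is what is happening here, appending .block() to the chain (or returning the Mono to a framework that subscribes for you) should let the write run to completion instead of leaving a truncated file.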
The response headers are below:
Transfer-Encoding: [chunked]
Connection: [keep-alive]
Date: [Thu, 30 Jun 2022 11:21:33 GMT]
Set-Cookie: [XSRF-TOKEN=e387daee-e52d-4b50-8c08-3a85831aa5eb; Path=/]
Content-Disposition: [attachment; filename="FSI0003208187.zip"]
X-Content-Type-Options: [nosniff]
X-XSS-Protection: [1; mode=block]
Cache-Control: [no-cache, no-store, max-age=0, must-revalidate]
Pragma: [no-cache]
Expires: [0]
x-request-id: [611bd5c7-2bf0-4174-bcd4-1257883739fa#8905198]
X-Kong-Upstream-Latency: [35]
X-Kong-Proxy-Latency: [1]
Via: [kong/2.6.0]
Environment details
OS: macOS Big Sur Version 11.6 (Apple M1 Chip)
Node.js version: v16.4.1
npm version: 7.23.0
@google-cloud/talent version: v4
Introduction
I'm having a hard time getting the job search in Google Cloud Talent Solution to work.
I can already create, read, and update tenants, companies, and jobs, which indicates that the credentials are OK.
But searching for jobs finds nothing.
Facts
Currently I have one job stored in Google Cloud Talent Solution.
This is the job export, done with the Google Cloud Console:
{
  "name": "projects/insurancepunk/tenants/75f8ac52-6e7c-4b00-9220-03771d25e9c5/jobs/135317048994472646",
  "requisition_id": "f9bffe6e-3c8c-40d5-b3c3-672d30485745",
  "title": "IT-Berater"
}
This is the JSON-stringified request passed to "searchJobs":
{
  "parent": "projects/insurancepunk/tenants/75f8ac52-6e7c-4b00-9220-03771d25e9c5",
  "searchMode": "JOB_SEARCH",
  "requestMetadata": { "domain": "insurancepunk.com", "sessionId": "8f47bbab-5c15-4bd9-9008-60f79030ab3b", "userId": "vCobKcXPFdf6zlVibjnb" },
  "jobQuery": { "query": "IT-Berater" }
}
As you can see, the project ID and the tenant ID match the exported job.
This is my very simple code:
const talent = require('@google-cloud/talent').v4;
const client = new talent.JobServiceClient();

client.searchJobs(request)
  .then(responses => {
    const resources = responses[0];
    for (const resource of resources) {
      console.log(`Job summary: ${resource.jobSummary}`);
      console.log(`Job title snippet: ${resource.jobTitleSnippet}`);
      const job = resource.job;
      console.log(`Job name: ${job.name}`);
      console.log(`Job title: ${job.title}`);
    }
  })
  .catch(err => {
    console.error(err);
  });
The code enters the then-path, but "responses" is empty.
Google OAuth Playground
When testing it in the Google OAuth Playground, I get these results.
Google OAuth Playground: https://developers.google.com/oauthplayground/
Google Talent Solution scope: https://www.googleapis.com/auth/jobs
The Scope was found here:
https://cloud.google.com/talent-solution/job-search/docs/reference/rpc/google.cloud.talent.v4
Output from Google OAuth Play Ground:
Request:
POST /v4/projects/insurancepunk/tenants/75f8ac52-6e7c-4b00-9220-03771d25e9c5/jobs:search HTTP/1.1
Host: jobs.googleapis.com
Content-length: 277
Content-type: application/json
Authorization: Bearer ya29.a0ARrda...
{
"parent":"projects/insurancepunk/tenants/75f8ac52-6e7c-4b00-9220-03771d25e9c5",
"searchMode":"JOB_SEARCH",
"requestMetadata":{"domain":"insurancepunk.com","sessionId":"8f47bbab-5c15-4bd9-9008-60f79030ab3b","userId":"vCobKcXPFdf6zlVibjnb"},
"jobQuery":{"query":"IT-Berater"}
}
Response:
HTTP/1.1 200 OK
Content-length: 117
X-xss-protection: 0
X-content-type-options: nosniff
Transfer-encoding: chunked
Vary: Origin, X-Origin, Referer
Server: ESF
-content-encoding: gzip
Cache-control: private
Date: Tue, 30 Nov 2021 09:44:40 GMT
X-frame-options: SAMEORIGIN
Alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Content-type: application/json; charset=UTF-8
{
"metadata": {
"requestId": "0e7330e1-ee15-404e-85d2-b8174679583f:APAb7ITvUURH6nrwLrbLLBCB9Zg6NKPMfg=="
}
}
Listing companies works well!
Request:
GET /v4/projects/insurancepunk/tenants/75f8ac52-6e7c-4b00-9220-03771d25e9c5/companies HTTP/1.1
Host: jobs.googleapis.com
Content-length: 0
Authorization: Bearer ya2...
Response:
HTTP/1.1 200 OK
Content-length: 1124
X-xss-protection: 0
Content-location: https://jobs.googleapis.com/v4/projects/insurancepunk/tenants/75f8ac52-6e7c-4b00-9220-03771d25e9c5/companies
X-content-type-options: nosniff
Transfer-encoding: chunked
Vary: Origin, X-Origin, Referer
Server: ESF
-content-encoding: gzip
Cache-control: private
Date: Tue, 30 Nov 2021 10:08:49 GMT
X-frame-options: SAMEORIGIN
Alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
Content-type: application/json; charset=UTF-8
{
"companies": [
{
"displayName": "Pompadour GmbH",
"name": "projects/insurancepunk/tenants/75f8ac52-6e7c-4b00-9220-03771d25e9c5/companies/a6f34dd2-76f1-40e8-8175-1274f49f5977",
"headquartersAddress": "Am Burgweg 1, 97346 Iphofen",
"imageUri": "http://www.pompadour.info/bar",
"derivedInfo": {
"headquartersLocation": {
"locationType": "STREET_ADDRESS",
"postalAddress": {
"postalCode": "97346",
"regionCode": "DE",
"administrativeArea": "BY",
"addressLines": [
"Am Burgweg 1, 97346 Iphofen, Germany"
],
"locality": "Iphofen"
},
"radiusMiles": 6.892640659556388e-05,
"latLng": {
"latitude": 49.7102381,
"longitude": 10.254041
}
}
},
"externalId": "9a1ebd16-886c-40ac-ae0a-d5a4e288f867",
"websiteUri": "http://www.pompadour.info",
"hiringAgency": true
}
],
"metadata": {
"requestId": "5780a724-ca05-4f56-8a88-d74fdc04a24e:APAb7IS/J4Hs1KThU2G0nCZk5fOdBT3sJw=="
}
}
All hints are welcome.
Wow!
This doesn't quite smell like artificial intelligence...
My one and only job had the title "IT-Berater".
Searching for "IT-Berater" returned an empty result set.
However, searching for "Berater" returned the job...
The results are the same whether I use the Node.js API or the raw Google HTTP API...
I've spent some time trying to fix this Elasticsearch bulk-upload warning:
Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header
My request is below:
POST http://elasticserver/_bulk HTTP/1.1
Authorization: xxx
Content-Type: application/x-ndjson; charset=utf-8
Host: elasticserver
Content-Length: 8559
... new line delimited json content ...
And my valid response with 200 status is below:
HTTP/1.1 200 OK
Warning: 299 Elasticsearch-5.5.1-19c13d0 "Content type detection for rest requests is deprecated. Specify the content type using the [Content-Type] header." "Mon, 14 Aug 2017 00:46:21 GMT"
content-type: application/json; charset=UTF-8
content-length: 4183
{"took":5538,"errors":false,...}
By experimenting I discovered that the issue is the charset in the content-type definition, Content-Type: application/x-ndjson; charset=utf-8; if I change it to Content-Type: application/x-ndjson, I get no warning.
Is this an Elasticsearch issue, or am I forming the request incorrectly?
The official documentation explicitly states that
When sending requests to this endpoint the Content-Type header should be set to application/x-ndjson.
The RestController source code also shows that the header value is compared verbatim, so a charset parameter prevents the match:
final String lowercaseMediaType = restRequest.header("Content-Type").toLowerCase(Locale.ROOT);
// we also support newline delimited JSON: http://specs.okfnlabs.org/ndjson/
if (lowercaseMediaType.equals("application/x-ndjson")) {
    restRequest.setXContentType(XContentType.JSON);
    return true;
}
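The comparison above matches the whole header value, so any parameter such as charset defeats it. A small self-contained sketch of that check (class and method names are mine, not Elasticsearch's):

```java
import java.util.Locale;

public class ContentTypeCheck {
    // Mirrors the verbatim comparison in the RestController excerpt above:
    // the entire header value must equal "application/x-ndjson" exactly.
    static boolean isNdjson(String contentType) {
        return contentType.toLowerCase(Locale.ROOT).equals("application/x-ndjson");
    }

    public static void main(String[] args) {
        System.out.println(isNdjson("application/x-ndjson"));                 // true
        System.out.println(isNdjson("Application/X-NDJSON"));                 // true, case is folded
        System.out.println(isNdjson("application/x-ndjson; charset=utf-8")); // false -> the warning
    }
}
```

This is why dropping "; charset=utf-8" silences the warning: the comparison then succeeds on the exact-match path instead of falling back to content-type detection.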
We perform a PUT request to another party using the CXF JAX-RS client. The request body is empty.
A simple request invocation leads to a server response with code 411.
Response-Code: 411
"Content-Length is missing"
The other party's REST server requires the Content-Length HTTP header to be set.
We switched chunking off according to the note about chunking, but this did not solve the problem. The REST server still answers with a 411 error.
Here is our conduit configuration from the cxf.xml file:
<http-conf:conduit name="{http://myhost.com/ChangePassword}WebClient.http-conduit">
    <http-conf:client AllowChunking="false"/>
</http-conf:conduit>
A line in the log confirms that our request was bound to the conduit configuration:
DEBUG o.a.cxf.transport.http.HTTPConduit - Conduit '{http://myhost.com/ChangePassword}WebClient.http-conduit' has been configured for plain http.
Adding the Content-Length header explicitly also did not help.
Invocation.Builder builder = ...
builder = builder.header(HttpHeaders.CONTENT_LENGTH, 0);
The CXF client's log confirms that the header is set; however, when we sniffed the packets, we were surprised to find that the setting had been completely ignored by the CXF client: the Content-Length header was not sent.
Here is the log; the Content-Length header is present:
INFO o.a.c.i.LoggingOutInterceptor - Outbound Message
---------------------------
ID: 1
Address: http://myhost.com/ChangePassword?username=abc%40gmail.com&oldPassword=qwerty123&newPassword=321ytrewq
Http-Method: PUT
Content-Type: application/x-www-form-urlencoded
Headers: {Accept=[application/json], client_id=[abcdefg1234567890abcdefg12345678], Content-Length=[0], Content-Type=[application/x-www-form-urlencoded], Cache-Control=[no-cache], Connection=[Keep-Alive]}
--------------------------------------
DEBUG o.apache.cxf.transport.http.Headers - Accept: application/json
DEBUG o.apache.cxf.transport.http.Headers - client_id: abcdefg1234567890abcdefg12345678
DEBUG o.apache.cxf.transport.http.Headers - Content-Length: 0
DEBUG o.apache.cxf.transport.http.Headers - Content-Type: application/x-www-form-urlencoded
DEBUG o.apache.cxf.transport.http.Headers - Cache-Control: no-cache
DEBUG o.apache.cxf.transport.http.Headers - Connection: Keep-Alive
And here is the output of the packet sniffer; the Content-Length header is not present:
PUT http://myhost.com/ChangePassword?username=abc%40gmail.com&oldPassword=qwerty123&newPassword=321ytrewq HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Accept: application/json
client_id: abcdefg1234567890abcdefg12345678
Cache-Control: no-cache
User-Agent: Apache-CXF/3.1.8
Pragma: no-cache
Host: myhost.com
Proxy-Connection: keep-alive
Does anyone know how to actually disable chunking?
Here is our code:
public static void main(String[] args)
{
    String clientId = "abcdefg1234567890abcdefg12345678";
    String uri = "http://myhost.com";
    String user = "abc@gmail.com";

    Client client = ClientBuilder.newBuilder().newClient();
    WebTarget target = client.target(uri);
    target = target.path("ChangePassword")
                   .queryParam("username", user)
                   .queryParam("oldPassword", "qwerty123")
                   .queryParam("newPassword", "321ytrewq");

    Invocation.Builder builder = target.request("application/json")
                                       .header("client_id", clientId)
                                       .header(HttpHeaders.CONTENT_LENGTH, 0);

    Response response = builder.put(Entity.form(new Form()));
    String body = response.readEntity(String.class);
    System.out.println(body);
}
Versions:
OS: Windows 7 Enterprise SP1
Arch: x86_64
Java: 1.7.0_80
CXF: 3.1.8
I had a very similar issue that I was not able to solve by turning off chunking, as you tried.
What I ended up doing was setting Content-Length to 1 and sending a single space " " as the body. In my case, proxy servers in front of the application were rejecting the request; the one-byte body got me past them, and the server processed the request fine since it operates only on the URL.
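The workaround above can be sketched with the JDK's own java.net.http client for illustration (an assumption on my part; the question uses CXF, and the URL and Content-Type are copied from it). The point is that a one-character body gives the client a known length to advertise:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ContentLengthWorkaround {
    public static void main(String[] args) {
        // A one-character body: the client can then advertise Content-Length: 1
        // instead of an empty entity (which the CXF setup was chunking away).
        HttpRequest.BodyPublisher body = HttpRequest.BodyPublishers.ofString(" ");
        System.out.println(body.contentLength()); // 1

        // Building the request does not open a connection; URL is from the question.
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://myhost.com/ChangePassword"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .PUT(body)
                .build();
        System.out.println(request.method()); // PUT
    }
}
```

In CXF terms the equivalent would be replacing the empty Entity.form(new Form()) with a single-space entity, so the conduit has a concrete body length to emit.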
I use google-api-services-calendar v3 for Java.
I can list my calendarEntries without problems, but inserting a new calendarEntry fails.
I use the same code as the samples:
CalendarListEntry newCal = new CalendarListEntry();
newCal.setId(calTitle);
newCal.setSummary(calTitle);
newCal.setTimeZone("Europe/Paris");

CalendarListEntry execute = null;
try {
    execute = service.calendarList().insert(newCal).execute();
} catch (IOException e) {
    e.printStackTrace();
}
The calendar is not created. In the logs I have :
CONFIG: {"id":"A_TEST","summary":"A_TEST","timeZone":"Europe/Paris"}
15 juin 2012 16:20:45 com.google.api.client.http.HttpRequest execute
CONFIG: -------------- REQUEST --------------
POST https://www.googleapis.com/calendar/v3/users/me/calendarList
Accept-Encoding: gzip
Authorization: <Not Logged>
User-Agent: Google-HTTP-Java-Client/1.10.2-beta (gzip)
Content-Type: application/json; charset=UTF-8
Content-Encoding: gzip
Content-Length: 69
CONFIG: Total: 60 bytes
CONFIG: {"id":"A_TEST","summary":"A_TEST","timeZone":"Europe/Paris"}
15 juin 2012 16:20:46 com.google.api.client.http.HttpResponse <init>
CONFIG: -------------- RESPONSE --------------
HTTP/1.1 404 Not Found
X-Frame-Options: SAMEORIGIN
Date: Fri, 15 Jun 2012 14:07:52 GMT
Content-Length: 120
Expires: Fri, 15 Jun 2012 14:07:52 GMT
X-XSS-Protection: 1; mode=block
Content-Encoding: gzip
Content-Type: application/json; charset=UTF-8
Server: GSE
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
CONFIG: Total: 165 bytes
CONFIG: {
"error": {
"errors": [
{
"domain": "global",
"reason": "notFound",
"message": "Not Found"
}
],
"code": 404,
"message": "Not Found"
}
}
Any ideas?
RTFM!
I was using the wrong URI. The insertion must be done against https://www.googleapis.com/calendar/v3/calendars,
not https://www.googleapis.com/calendar/v3/users/me/calendarList.
The code is :
Calendar newCal = new Calendar();
newCal.setSummary(calTitle);
newCal.setTimeZone("Europe/Paris");

Calendar createdCalendar = null;
try {
    createdCalendar = service.calendars().insert(newCal).execute();
} catch (Exception e) {
    e.printStackTrace();
}