Using jclouds to list containers on a SAIO OpenStack Swift server not successful - blobstore

I have set up a SAIO server according to OpenStack Swift's site: http://docs.openstack.org/developer/swift/development_saio.html#loopback-section
I'm using the default test account. I can reach it with curl from other machines using these commands:
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://x.x.x.x:8080/auth/v1.0
This gives me the token and storage url which I then use to GET/POST/etc.
'x.x.x.x' is the ip of the machine.
curl -X GET -i -H 'X-Auth-Token: {token}' http://x.x.x.x:8080/v1/AUTH_test/container-bdf7f288-31f9-4cc1-9ab4-f0705dda763f
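Those two calls can be glued together in a small shell sketch; the header values below are canned samples for illustration (a live run would take them from the first curl):

```shell
# A live run would capture the headers with:
#   curl -s -D - -o /dev/null \
#     -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' \
#     http://x.x.x.x:8080/auth/v1.0
# Canned sample headers so this sketch runs without the server:
headers='HTTP/1.1 200 OK
X-Storage-Url: http://x.x.x.x:8080/v1/AUTH_test
X-Auth-Token: AUTH_tk1234567890abcdef
Content-Length: 0'

# Pull the token and storage URL out of the headers.
token=$(printf '%s\n' "$headers" | sed -n 's/^X-Auth-Token: *//p')
storage_url=$(printf '%s\n' "$headers" | sed -n 's/^X-Storage-Url: *//p')

echo "token=$token"
echo "storage_url=$storage_url"
# Subsequent requests then go to the storage URL:
#   curl -H "X-Auth-Token: $token" "$storage_url"
```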
I want to be able to work with this server using jclouds. However, I'm unable to do basic functions such as listing containers. I'm using the example provided here: http://jclouds.apache.org/guides/openstack/
I have my init method as this:
private void init() {
    Iterable<Module> modules = ImmutableSet.<Module> of(
            new SLF4JLoggingModule());
    String api = "swift";
    String identity = "test:tester"; // tenantName:userName
    String password = "testing"; // demo account uses ADMIN_PASSWORD too
    BlobStoreContext context = ContextBuilder.newBuilder(api)
            .endpoint("http://x.x.x.x:8080/")
            .credentials(identity, password)
            .modules(modules)
            .buildView(BlobStoreContext.class);
    storage = context.getBlobStore();
    swift = context.unwrap();
}
This is part of the console output:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
List Containers
org.jclouds.http.HttpResponseException: command: GET http://x.x.x.x:8080/v1.0 HTTP/1.1 failed with response: HTTP/1.1 412 Precondition Failed; content: [Bad URL]
at org.jclouds.openstack.swift.handlers.ParseSwiftErrorFromHttpResponse.handleError(ParseSwiftErrorFromHttpResponse.java:55)
at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:67)
at org.jclouds.http.internal.BaseHttpCommandExecutorService.shouldContinue(BaseHttpCommandExecutorService.java:180) ...
And this is the log in the proxy.log when I try to list containers:
Mar 13 11:40:42 minint-klnhv9g proxy-server: {requesting ip} {requesting ip} 13/Mar/2014/18/40/42 GET /v1.0 HTTP/1.0 412 - jclouds/1.7.1%20java/1.7.0_05 - - 7 - tx670f536e9c634dc0a69d3-005321fbaa - 0.0002 - - 1394736042.856692076 1394736042.856895924
I've tried searching for a solution for a few days now, but I have not found anything. Thank you very much!

Can you try appending the auth and version suffix to the endpoint, e.g.:
BlobStoreContext context = ContextBuilder.newBuilder(api)
        .endpoint("http://x.x.x.x:8080/auth/v1.0")

Related

Getting error after entering a value in a text box in karate UI via Firefox

This issue occurs only when I invoke the scripts via the Firefox driver: after entering a value in a text box, I get an error and the test fails.
configure driver = { type: 'geckodriver', executable: 'C:\Users\dinesh\Downloads\geckodriver-v0.31.0-win64\geckodriver.exe' }
driver 'https://courses.ultimateqa.com/users/sign_in'
screenshot()
driver.maximize()
input("//*[@id='user[email]']", "abc@gmail.com")
Error message
* input("//*[@id='user[first_name]']", 'welcome')
js failed:
>>>>
01: input("//*[@id='user[first_name]']", 'welcome')
<<<<
org.graalvm.polyglot.PolyglotException: Expected to find an object with property ['message'] in path $['value'] but found 'null'. This is not a json object according to the JsonProvider: 'com.jayway.jsonpath.spi.json.JsonSmartJsonProvider'.
- com.jayway.jsonpath.internal.path.PropertyPathToken.evaluate(PropertyPathToken.java:71)
- com.jayway.jsonpath.internal.path.PathToken.handleObjectProperty(PathToken.java:81)
- com.jayway.jsonpath.internal.path.PropertyPathToken.evaluate(PropertyPathToken.java:79)
- com.jayway.jsonpath.internal.path.RootPathToken.evaluate(RootPathToken.java:62)
- com.jayway.jsonpath.internal.path.CompiledPath.evaluate(CompiledPath.java:99)
- com.jayway.jsonpath.internal.path.CompiledPath.evaluate(CompiledPath.java:107)
- com.jayway.jsonpath.JsonPath.read(JsonPath.java:185)
Please help; the XPath is right, and Karate is writing the value into the text box, but after that it fails.
For me this code works, both with chromedriver and geckodriver. Of course, these executables must be on the PATH environment variable.
I also added the waitFor method to wait for each web element.
Feature: sample karate test script
  for help, see: https://github.com/intuit/karate/wiki/IDE-Support

  Background:
    #* configure driver = { type: 'chromedriver', executable: 'chromedriver' }
    * configure driver = { type: 'geckodriver', executable: 'geckodriver' }
    #* def sleep = function(pause){ java.lang.Thread.sleep(pause) }

  Scenario: test error
    Given driver 'https://courses.ultimateqa.com/users/sign_in'
    When waitFor("//*[@id='user[email]']").input('abc@gmail.com')
    And waitFor("//*[@id='user[password]']").input('qwerty1234')
    And waitFor("//input[@type='submit']").click()
    * screenshot()
    #* call sleep 3000

Possible reasons for groovy program running as kubernetes job dumping threads during execution

I have a simple groovy script that leverages the GPars library's withPool functionality to launch HTTP GET requests to two internal API endpoints in parallel.
The script runs fine locally, both directly as well as a docker container.
When I deploy it as a Kubernetes Job (in our internal EKS cluster: 1.20), it runs there as well, but the moment it hits the first withPool call, I see a giant thread dump, but the execution continues, and completes successfully.
NOTE: Containers in our cluster run with the following pod security context:
securityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
Environment
# From the k8s job container
groovy@app-271df1d7-15848624-mzhhj:/app$ groovy --version
WARNING: Using incubator modules: jdk.incubator.foreign, jdk.incubator.vector
Groovy Version: 4.0.0 JVM: 17.0.2 Vendor: Eclipse Adoptium OS: Linux
groovy@app-271df1d7-15848624-mzhhj:/app$ ps -ef
UID PID PPID C STIME TTY TIME CMD
groovy 1 0 0 21:04 ? 00:00:00 /bin/bash bin/run-script.sh
groovy 12 1 42 21:04 ? 00:00:17 /opt/java/openjdk/bin/java -Xms3g -Xmx3g --add-modules=ALL-SYSTEM -classpath /opt/groovy/lib/groovy-4.0.0.jar -Dscript.name=/usr/bin/groovy -Dprogram.name=groovy -Dgroovy.starter.conf=/opt/groovy/conf/groovy-starter.conf -Dgroovy.home=/opt/groovy -Dtools.jar=/opt/java/openjdk/lib/tools.jar org.codehaus.groovy.tools.GroovyStarter --main groovy.ui.GroovyMain --conf /opt/groovy/conf/groovy-starter.conf --classpath . /tmp/script.groovy
groovy 116 0 0 21:05 pts/0 00:00:00 bash
groovy 160 116 0 21:05 pts/0 00:00:00 ps -ef
Script (relevant parts)
@Grab('org.codehaus.gpars:gpars:1.2.1')
import static groovyx.gpars.GParsPool.withPool
import groovy.json.JsonSlurper
final def jsl = new JsonSlurper()
//...
while (!(nextBatch = getBatch(batchSize)).isEmpty()) {
    def devThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = dev + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                            useCaches: true, allowUserInteraction: false))
                    devResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    def stgThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = stg + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds, readTimeout: 10.seconds,
                            useCaches: true, allowUserInteraction: false))
                    stgResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    devThread.join()
    stgThread.join()
}
Dockerfile
FROM groovy:4.0.0-jdk17 as builder
USER root
RUN apt-get update && apt-get install -yq bash curl wget jq
WORKDIR /app
COPY bin /app/bin
RUN chmod +x /app/bin/*
USER groovy
ENTRYPOINT ["/bin/bash"]
CMD ["bin/run-script.sh"]
The bin/run-script.sh simply downloads the above groovy script at runtime and executes it.
wget "$GROOVY_SCRIPT" -O "$LOCAL_FILE"
...
groovy "$LOCAL_FILE"
As soon as the execution hits the first call to withPool(poolSize), there's a giant thread dump, but execution continues.
I'm trying to figure out what could be causing this behavior. Any ideas 🤷🏽‍♂️?
Thread dump
For posterity, answering my own question here.
The issue turned out to be this log4j2 JVM hot-patch that we're currently leveraging to fix the recent log4j2 vulnerability. This agent (running as a DaemonSet) patches all running JVMs in all our k8s clusters.
This, somehow, causes my OpenJDK 17 based app to thread dump. I found the same issue with an ElasticSearch 8.1.0 deployment as well (also uses a pre-packaged OpenJDK 17). This one is a service, so I could see a thread dump happening pretty much every half hour! Interestingly, there are other JVM services (and some SOLR 8 deployments) that don't have this issue 🤷🏽‍♂️.
Anyway, I worked with our devops team to temporarily exclude the node that deployment was running on, and lo and behold, the thread dumps disappeared!
Balance in the universe has been restored 🧘🏻‍♂️.

How to set up Karate browser capabilities acceptInsecureCerts:true for geckodriver [duplicate]

This question already has an answer here:
How to fix - `ERROR com.intuit.karate - http request failed`
(1 answer)
Closed 1 year ago.
I tried this way to set up the capabilities of my geckodriver for my karate tests.
I am using karate.version 0.9.6
Here is the geckodriver driver: 64bit windows: https://github.com/mozilla/geckodriver/releases/tag/v0.29.1
firefox Version 89.0.2 (64-bit)
def session = { capabilities: { acceptInsecureCerts:true, browserName: 'firefox', proxy: { proxyType: 'manual', httpProxy: '127.0.0.1:8080' } } }
configure driver = { type: 'geckodriver', showDriverLog: true , executable: 'driver/geckodriver.exe', webDriverSession: '#(session)' }
However, it is obviously not picking up my settings.
Here is my log:
1 > User-Agent: Apache-HttpClient/4.5.12 (Java/1.8.0_41)
{"capabilities":{"acceptInsecureCerts":true,"browserName":"firefox","proxy":{"proxyType":"manual","httpProxy":"127.0.0.1:8080"}}}
13:25:13.121 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - 1626121513121 mozrunner::runner INFO Running command: "C:\\Program Files\\Mozilla Firefox\\firefox.exe" "--marionette" "-foreground" "-no-remote" "-profile" "C:\\Users\\xxxxx\\AppData\\Local\\Temp\\rust_mozprofiledFOSxn"
13:25:16.428 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - 1626121516428 Marionette INFO Marionette enabled
13:25:20.065 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - console.warn: SearchSettings: "get: No settings file exists, new profile?" (new NotFoundError("Could not open the file at C:\\Users\\xxxxx\\AppData\\Local\\Temp\\rust_mozprofiledFOSxn\\search.json.mozlz4", (void 0)))
13:25:20.368 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - console.error: Region.jsm: "Error fetching region" (new TypeError("NetworkError when attempting to fetch resource.", ""))
13:25:20.369 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - console.error: Region.jsm: "Failed to fetch region" (new Error("NO_RESULT", "resource://gre/modules/Region.jsm", 419))
13:25:20.960 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - 1626121520961 Marionette INFO Listening on port 58400
13:25:21.071 [ForkJoinPool-1-worker-1] DEBUG com.intuit.karate - response time in milliseconds: 7997.52
1 < 200
1 < cache-control: no-cache
1 < content-length: 712
1 < content-type: application/json; charset=utf-8
1 < date: Mon, 12 Jul 2021 20:25:13 GMT
{"value":{"sessionId":"b17123ef-1426-45d2-827b-adbc35b02e46","capabilities":{"acceptInsecureCerts":false,"browserName":"firefox","browserVersion":"89.0.2","moz:accessibilityChecks":false,"moz:buildID":"20210622155641","moz:geckodriverVersion":"0.29.1","moz:headless":false,"moz:processID":36360,"moz:profile":"C:\\Users\\wli2\\AppData\\Local\\Temp\\rust_mozprofiledFOSxn","moz:shutdownTimeout":60000,"moz:useNonSpecCompliantPointerOrigin":false,"moz:webdriverClick":true,"pageLoadStrategy":"normal","platformName":"windows","platformVersion":"10.0","setWindowRect":true,"strictFileInteractability":false,"timeouts":{"implicit":0,"pageLoad":300000,"script":30000},"unhandledPromptBehavior":"dismiss and notify"}}}
My goal is to get around this security check page.
Also, even if I tried to click that button in that security check page, my script is not able to get the buttons from the dom tree when I do the following.
And click('button[id=advancedButton]')
And click('button[id=exceptionDialogButton]')
It might be related to this: KarateUI: How to Handle SSL Certificate during geckodriver configuration? I added alwaysMatch and it is now able to pick up the capabilities.
* def session = { capabilities: {alwaysMatch:{ acceptInsecureCerts:true, browserName: 'firefox' }}}
* configure driver = { type: 'geckodriver', showDriverLog: true , executable: 'driver/geckodriver.exe', webDriverSession: '#(session)' }
This is an area that may require you to do some research and contribute findings back to the community. Karate passes the capabilities you define "as-is" to the driver. One thing you should look at is whether any command-line options should be passed to geckodriver; for Chrome, for example, I remember there is a flag for ignoring these security errors. Note that you can use the addOptions flag in the Karate driver options.
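As a concrete sketch of that last point (the flag shown is the Chrome one mentioned above and is only an assumption for illustration; a geckodriver equivalent, if any, would need research):

```
* configure driver = { type: 'chromedriver', addOptions: [ '--ignore-certificate-errors' ] }
```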

Receiving error in AWS Secrets manager awscli for: Version "AWSCURRENT" not found when deployed via Terraform

Overview
1. Create an aws_secretsmanager_secret
2. Create an aws_secretsmanager_secret_version
3. Store a uniquely generated string as that version
4. Use a local-exec provisioner to store the actual secured string using bash
5. Reference that string using the secretsmanager resource in, for example, an RDS instance deployment
Objective
- Keep all plain-text strings out of remote state residing in an S3 bucket
- Use AWS Secrets Manager to store these strings
- Set once, retrieve by calling the resource in Terraform
Problem
Error: Secrets Manager Secret
"arn:aws:secretsmanager:us-east-1:82374283744:secret:Example-rds-secret-fff42b69-30c1-df50-8e5c-f512464a4a11-pJvC5U"
Version "AWSCURRENT" not found
when running terraform apply
Question
Why isn't it moving the AWSCURRENT version automatically? Am I missing something? Is my bash command wrong? The value does not write to the secret_version, but it does reference it correctly.
The local-exec provisioner in main.tf that actually performs the command:
provisioner "local-exec" {
  command = "bash -c 'RDSSECRET=$(openssl rand -base64 16); aws secretsmanager put-secret-value --secret-id ${data.aws_secretsmanager_secret.secretsmanager-name.arn} --secret-string $RDSSECRET --version-stages AWSCURRENT --region ${var.aws_region} --profile ${var.aws-profile}'"
}
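Incidentally, the string-generation half of that command can be checked on its own; a minimal sketch (the aws call is left as a comment because it needs live credentials):

```shell
# 16 random bytes, base64-encoded, always yields a 24-character string.
RDSSECRET=$(openssl rand -base64 16)
echo "${#RDSSECRET}"   # prints 24
# The provisioner then stores it with (not run here):
#   aws secretsmanager put-secret-value --secret-id <secret-arn> \
#     --secret-string "$RDSSECRET" --version-stages AWSCURRENT
```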
Code
main.tf
data "aws_secretsmanager_secret_version" "rds-secret" {
  secret_id = aws_secretsmanager_secret.rds-secret.id
}

data "aws_secretsmanager_secret" "secretsmanager-name" {
  arn = aws_secretsmanager_secret.rds-secret.arn
}

resource "random_password" "db_password" {
  length           = 56
  special          = true
  min_special      = 5
  override_special = "!#$%^&*()-_=+[]{}<>:?"

  keepers = {
    pass_version = 1
  }
}

resource "random_uuid" "secret-uuid" {}

resource "aws_secretsmanager_secret" "rds-secret" {
  name = "DAL-${var.environment}-rds-secret-${random_uuid.secret-uuid.result}"
}

resource "aws_secretsmanager_secret_version" "rds-secret-version" {
  secret_id     = aws_secretsmanager_secret.rds-secret.id
  secret_string = random_password.db_password.result

  provisioner "local-exec" {
    command = "bash -c 'RDSSECRET=$(openssl rand -base64 16); aws secretsmanager put-secret-value --secret-id ${data.aws_secretsmanager_secret.secretsmanager-name.arn} --secret-string $RDSSECRET --region ${var.aws_region} --profile ${var.aws-profile}'"
  }
}
variables.tf
variable "aws-profile" {
  description = "Local AWS Profile Name"
  type        = "string"
}

variable "aws_region" {
  description = "aws region"
  type        = "string"
  default     = "us-east-1"
}
variable "environment" {}
terraform.tfvars
aws_region="us-east-1"
aws-profile="Example-Environment"
environment="dev"
The error likely isn't occurring in your provisioner execution per se, because if you remove the provisioner block the error still occurs on apply, but confusingly only the first time after a destroy.
Removing the data "aws_secretsmanager_secret_version" "rds-secret" block as well "resolves" the error completely.
I'm guessing there is some sort of config delay issue here...but adding a 20 second delay provisioner to the aws_secretsmanager_secret.rds-secret resource block didn't help.
And the value from the data block can be successfully output on subsequent apply runs, so maybe it's not just timing.
Even if you resolve the above more basic issue, it's likely your provisioner will still be confusing things by modifying a resource that Terraform is trying to manage in the same run. I'm not sure there's a way to get around that except perhaps by splitting into two separate operations.
Update:
It turns out that on the first run the data sources are read before the aws_secretsmanager_secret_version resource is created. Just adding depends_on = [aws_secretsmanager_secret_version.rds-secret-version] to the data "aws_secretsmanager_secret_version" block resolves this fully and makes the interpolation for your provisioner work as well. I haven't tested the actual provisioner.
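Concretely, that change to the data block looks like this (only the new depends_on line differs from the question's main.tf):

```hcl
data "aws_secretsmanager_secret_version" "rds-secret" {
  secret_id = aws_secretsmanager_secret.rds-secret.id

  # Defer this read until the version resource exists.
  depends_on = [aws_secretsmanager_secret_version.rds-secret-version]
}
```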
Also you may need to consider this (which I take to not always apply to 0.13):
NOTE: In Terraform 0.12 and earlier, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using depends_on with data resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses depends_on with a data resource can never converge. Due to this behavior, we do not recommend using depends_on with data resources.

nativescript paytm plugin giving 404 Not Found Error nginx/1.6.2

I'm trying to integrate the @nstudio/nativescript-paytm plugin into my app.
As the first step, I have written code to generate the checksum, and I am now getting the checksum. When I pass it to the further steps, I get the error below.
Error in console
chromium: [INFO:library_loader_hooks.cc(50)] Chromium logging enabled: level = 0, default verbosity = 0
06-22 07:31:56.479 2762 2762 I cr_BrowserStartup: Initializing chromium process, singleProcess=true
chromium: [ERROR:filesystem_posix.cc(89)] stat /data/user/0/org.nativescript.demo/cache/WebView/Crashpad: No such file or directory (2)
chromium: [ERROR:filesystem_posix.cc(62)] mkdir /data/user/0/org.nativescript.demo/cache/WebView/Crashpad: No such file or directory (2)
My order param is
var order = {
    MID: "V************3",
    ORDER_ID: "order1",
    CUST_ID: "cust123",
    INDUSTRY_TYPE_ID: "Retail",
    CHANNEL_ID: "WEB",
    TXN_AMOUNT: "100.12",
    WEBSITE: "WEBSTAGING",
    CALLBACK_URL: "https://pguat.paytm.com/paytmchecksum/paytmCallback.jsp",
    CHECKSUMHASH: "NDspZhvSHbq44K3A9Y4daf9En3l2Ndu9fmOdLG+bIwugQ6682Q3JiNprqmhiWAgGUnNcxta3LT2Vtk3EPwDww8o87A8tyn7/jAS2UAS9m+c="
};
I'm using the plugin's GitHub sample (the JavaScript demo) and haven't modified the code.
