Disable info logs during testing a buffalo application - go

I am unable to disable INFO logs during testing.
Is there a way to do so?
Thanks.

Set the output of buffalo.Options.Logger to ioutil.Discard; you might have to create an instance of a logger to do it:

import (
	"io/ioutil"

	"github.com/gobuffalo/buffalo"
	"github.com/gobuffalo/logger"
	"github.com/sirupsen/logrus"
	// etc.
)

// logger.Logrus wraps a logrus logger, so give it a real instance first
// (the exact field name may differ between logger versions).
l := logrus.New()
l.SetOutput(ioutil.Discard) // discard everything the app logs
noopLogger := logger.Logrus{FieldLogger: l}

app := buffalo.New(buffalo.Options{Logger: noopLogger})

Related

Using go-kit logger api missing methods

I want to use the logger from the go-kit repository, and I saw that the author also provides a logrus API/factory. While trying to use some common logrus API functionality like WithFields and Error / Info / Panic, I found I could only use Log.
Any idea how I can add the missing log functionality?
This is what I miss, the logrus.WithFields API:

log.WithFields(log.Fields{
	"animal": "walrus",
}).Info("A walrus appears")

and also Info / Error / Debug etc.
This is what I've tried:

package main

import (
	log "github.com/go-kit/kit/log/logrus"
	"github.com/sirupsen/logrus"
)

func main() {
	logrusLogger := logrus.New()
	logrusLogger.Formatter = &logrus.JSONFormatter{TimestampFormat: "02-01-2006 15:04:05"}
	logger := log.NewLogrusLogger(logrusLogger)

	logger.Log("hello", "world") // works
	logger.WithFields(...)       // doesn't compile
	logger.Info(...)             // doesn't compile
}

The logger is backed by logrus, but I cannot use WithFields or Info/Error/Debug etc. Any idea what I am missing here? Since go-kit creates the logger through a factory, is there a way to use the logrus API?
It's because log.NewLogrusLogger() creates the unexported logrusLogger type, which has only one method, Log (satisfying the log.Logger interface). It doesn't expose the other methods from logrus itself.
This Log method takes its arguments as key-value pairs and puts them into logrus.Fields while logging. So if you call Log("hello", "world"), it sets the hello field's value to world. But this wouldn't work for levels or other logrus features.
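For example, with the JSON formatter configured above, logger.Log("hello", "world") goes through logrus at the Info level, so the output would look roughly like this (timestamp aside):

{"hello":"world","level":"info","msg":"","time":"12-08-2016 07:47:26"}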
However, since logrus.FieldLogger is embedded in the implementation of the logruslogger, we can assert our logger to behave like logrus.FieldLogger and then do this:
package main

import (
	log "github.com/go-kit/kit/log/logrus"
	"github.com/sirupsen/logrus"
)

func main() {
	logrusLogger := logrus.New()
	logrusLogger.Formatter = &logrus.JSONFormatter{TimestampFormat: "02-01-2006 15:04:05"}

	// Assert the go-kit wrapper back to logrus.FieldLogger to reach the logrus methods.
	logger := log.NewLogrusLogger(logrusLogger).(logrus.FieldLogger)

	logger.Error("Hello")
	logger.Warn("Warning you")
	logger.WithField("good", "bad").Infoln("is it good or bad?")
}
I hope this helps. But since they exposed only the Log method, there may be conscious design decisions behind that. You may keep using just Log, or, if you want more flexibility, I would suggest setting up your own logger (using logrus) instead of the assertion I did above. That would be a cleaner approach IMO.
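For completeness, a minimal sketch of that cleaner approach, using logrus directly with the same JSON formatter and skipping the go-kit wrapper entirely:

package main

import (
	"github.com/sirupsen/logrus"
)

func main() {
	// Plain logrus, configured the same way as in the question.
	logger := logrus.New()
	logger.Formatter = &logrus.JSONFormatter{TimestampFormat: "02-01-2006 15:04:05"}

	// The full logrus API is available directly.
	logger.WithFields(logrus.Fields{
		"animal": "walrus",
	}).Info("A walrus appears")
	logger.Error("something went wrong")
}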

Jmeter: how to initialise header manager element globally

I want to use the same set of headers in multiple jmx files, so I would like to initialise them once and reuse them across my jmx files.
Can anyone help me meet this requirement? Thanks in advance.
That's not possible.
To apply a Header Manager to the whole plan it would need the widest scope, but using an Include or Module controller means a reduced scope.
Thanks to JMeter's scoping rules, though, you can set your Header Manager as a child of the Test Plan and it will apply to all requests.
You could also use properties and the __P() function to make the header values configurable in user.properties.
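For example (the property name auth.token here is just illustrative), a header value in the Header Manager could read:

${__P(auth.token,defaultValue)}

and user.properties would then contain:

auth.token=my-real-token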
You can do this as follows:
1. Create a CSV file called headers.csv to hold your headers, like:
header-1-name,header-1-value
header-2-name,header-2-value
and store it in the "bin" folder of your JMeter installation.
2. Add an empty HTTP Header Manager to the top level of your Test Plan.
3. Add a setUp Thread Group to your Test Plan.
4. Add a JSR223 Sampler to the setUp Thread Group.
5. Put the following code into the "Script" area:
import org.apache.jmeter.protocol.http.control.Header
import org.apache.jmeter.protocol.http.control.HeaderManager
import org.apache.jmeter.threads.JMeterContext
import org.apache.jmeter.threads.JMeterContextService
import org.apache.jorphan.collections.SearchByClass

SampleResult.setIgnore() // don't record this setup sampler in the results

// get hold of the test plan tree the engine is running
def engine = ctx.getEngine()
def testPlanTree = org.apache.commons.lang3.reflect.FieldUtils.readDeclaredField(engine, "test", true)

// locate the HeaderManager instances in the test plan
def headerManagerSearch = new SearchByClass<>(HeaderManager.class)
testPlanTree.traverse(headerManagerSearch)
def headerManagers = headerManagerSearch.getSearchResults()

// populate the first Header Manager found with the headers from the CSV file
headerManagers.any { headerManager ->
    new File('headers.csv').readLines().each { line ->
        def values = line.split(',')
        headerManager.add(new Header(values[0], values[1]))
    }
}
If you want, you can "externalize" points 3 and 4 via a Test Fragment.

error spark-shell, falling back to uploading libraries under SPARK_HOME

I'm trying to connect a spark-shell to Amazon Hadoop (EMR), but it keeps giving the following error and I do not know how to fix it or what configuration is missing.
spark.yarn.jars, spark.yarn.archive
spark-shell --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hadoop.jar
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/08/12 07:47:26 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/08/12 07:47:28 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Thx!!!
Error 1
I'm trying to run a SQL query, something totally simple, like:
val sqlDF = spark.sql("SELECT col1 FROM tabl1 limit 10")
sqlDF.show()
WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Error 2
Then I try to run a Scala script, something simple taken from:
https://blogs.aws.amazon.com/bigdata/post/Tx2D93GZRHU3TES/Using-Spark-SQL-for-ETL
import org.apache.hadoop.io.Text
import org.apache.hadoop.dynamodb.DynamoDBItemWritable
import com.amazonaws.services.dynamodbv2.model.AttributeValue
import org.apache.hadoop.dynamodb.read.DynamoDBInputFormat
import org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.io.LongWritable
import java.util.HashMap

var ddbConf = new JobConf(sc.hadoopConfiguration)
ddbConf.set("dynamodb.output.tableName", "tableDynamoDB")
ddbConf.set("dynamodb.throughput.write.percent", "0.5")
ddbConf.set("mapred.input.format.class", "org.apache.hadoop.dynamodb.read.DynamoDBInputFormat")
ddbConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")

var genreRatingsCount = sqlContext.sql("SELECT col1 FROM table1 LIMIT 1")
var ddbInsertFormattedRDD = genreRatingsCount.map(a => {
    var ddbMap = new HashMap[String, AttributeValue]()

    var col1 = new AttributeValue()
    col1.setS(a.get(0).toString)
    ddbMap.put("col1", col1)

    var item = new DynamoDBItemWritable()
    item.setItem(ddbMap)

    (new Text(""), item)
})
ddbInsertFormattedRDD.saveAsHadoopDataset(ddbConf)
scala.reflect.internal.Symbols$CyclicReference: illegal cyclic reference involving object InterfaceAudience
at scala.reflect.internal.Symbols$Symbol$$anonfun$info$3.apply(Symbols.scala:1502)
at scala.reflect.internal.Symbols$Symbol$$anonfun$info$3.apply(Symbols.scala:1500)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
It looked as though the Spark UI had not started, so I launched spark-shell again and checked the Spark UI at localhost:4040, which is running correctly.

Simulate session using WithServer

I am trying to port tests from using FakeRequest to using WithServer.
In order to simulate a session with FakeRequest, it is possible to use WithSession("key", "value") as suggested in this post: Testing controller with fake session
However when using WithServer, the test now looks like:
"render the users page" in WithServer {
val users = await(WS.url("http://localhost:" + port + "/users").get)
users.status must equalTo(OK)
users.body must contain("Users")
}
Since there is no WithSession(..) method available, I tried WithHeaders(..) instead (does that even make sense?), to no avail.
Any ideas?
Thanks
So I found this question, which is relatively old:
Add values to Session during testing (FakeRequest, FakeApplication)
The first answer to that question seems to have been a pull request to add .WithSession(...) to FakeRequest, but that is not applicable to WS.url.
The second answer gives me what I need:
Create the cookie:

val sessionCookie = Session.encodeAsCookie(Session(Map("key" -> "value")))

Create and execute the request:

val users = await(WS.url("http://localhost:" + port + "/users")
    .withHeaders(play.api.http.HeaderNames.COOKIE -> Cookies.encodeCookieHeader(Seq(sessionCookie)))
    .get())
users.status must equalTo(OK)
users.body must contain("Users")

Finally, the assertions pass properly, instead of redirecting me to the login page.
Note: I am using Play 2.4, so I use Cookies.encodeCookieHeader, because Cookies.encode is deprecated.
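Putting the pieces together, the whole test would then look roughly like this (the "key" -> "value" entry stands in for whatever session data your controller expects):

"render the users page" in new WithServer {
    val sessionCookie = Session.encodeAsCookie(Session(Map("key" -> "value")))
    val users = await(WS.url("http://localhost:" + port + "/users")
        .withHeaders(play.api.http.HeaderNames.COOKIE -> Cookies.encodeCookieHeader(Seq(sessionCookie)))
        .get())
    users.status must equalTo(OK)
    users.body must contain("Users")
}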

ContainerLaunchContext.setResource() missing of hadoop yarn

http://hadoop.apache.org/docs/r2.1.0-beta/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
I am trying to make the example from the above link work, but I can't compile the code below:
Resource capability = Records.newRecord(Resource.class);
capability.setMemory(512);
amContainer.setResource(capability);
// Set the container launch content into the
// ApplicationSubmissionContext
appContext.setAMContainerSpec(amContainer);
amContainer is a ContainerLaunchContext and my hadoop version is 2.1.0-beta.
I did some investigation and found there is no method "setResource" in ContainerLaunchContext.
I have three questions about this:
1) Has the method been removed or something?
2) If the method has been removed, what should I do instead?
3) Is there any documentation about YARN? The docs on the website are very thin; I was hoping for a manual or something. For example, with capability.setMemory(512); I can't tell from the comments in the code whether that means 512 KB or 512 MB.
This is actually the proper solution to the question; the previous answer might cause incorrect execution!
@Dyin, I couldn't fit it in a comment ;) Validated for 2.2.0 and 2.3.0.
Driver setting up resources for AppMaster:
ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
ApplicationId appId = appContext.getApplicationId();
appContext.setApplicationName(this.appName);

// Set up the container launch context for the application master
ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);

// The AM resources are now set on the ApplicationSubmissionContext,
// not on the ContainerLaunchContext as in the older examples
Resource capability = Records.newRecord(Resource.class);
capability.setMemory(amMemory);
appContext.setResource(capability);
appContext.setAMContainerSpec(amContainer);

Priority pri = Records.newRecord(Priority.class);
pri.setPriority(amPriority);
appContext.setPriority(pri);
appContext.setQueue(amQueue);

// Submit the application to the applications manager
yarnClient.submitApplication(appContext); // this.yarnClient = YarnClient.createYarnClient();
In the ApplicationMaster, this is how you should specify resources for containers (workers):
private AMRMClient.ContainerRequest setupContainerAskForRM() {
    // set up requirements for hosts;
    // using * as any host will do for the distributed shell app
    // set the priority for the request
    Priority pri = Records.newRecord(Priority.class);
    pri.setPriority(requestPriority);

    // Set up resource type requirements
    // For now, only memory is supported so we set memory requirements
    Resource capability = Records.newRecord(Resource.class);
    capability.setMemory(containerMemory);

    AMRMClient.ContainerRequest request =
            new AMRMClient.ContainerRequest(capability, null, null, pri);
    return request;
}
Then, in some run() or main() method in your ApplicationMaster:
AMRMClientAsync.CallbackHandler allocListener = new RMCallbackHandler();
resourceManager = AMRMClientAsync.createAMRMClientAsync(1000, allocListener);
resourceManager.init(conf);
resourceManager.start();

for (int i = 0; i < numTotalContainers; ++i) {
    AMRMClient.ContainerRequest containerAsk = setupContainerAskForRM();
    resourceManager.addContainerRequest(containerAsk);
}
Launching containers
You can still use the solution from the original answer (the java command), but it's just a cherry on top; it should work anyway. You can set the memory available to the ApplicationMaster via the command, like so:
// Set the necessary command to execute the application master
Vector<CharSequence> vargs = new Vector<CharSequence>(30);
...
vargs.add("-Xmx" + amMemory + "m"); // notice "m" indicating megabytes, you can use also -Xms combined with -Xmx
... // transform vargs to String commands
amContainer.setCommands(commands);
This should solve your problem. As for your three questions: YARN is rapidly evolving software, so my advice is to forget the documentation, get the source code, and read it; it will answer a lot of your questions. (Regarding question 3: Resource memory is expressed in megabytes, so capability.setMemory(512) requests 512 MB.)
