I've seen benchmarks of Actor model implementations done in terms of the actors themselves. For example, Akka actors are very lightweight (600 bytes per actor), and millions of them can be created. However, I've never seen a benchmark done in terms of message-passing throughput.
For example, given some number of actors, how many messages can pass between them per second?
Does anyone have a link to such a performance benchmark (in terms of message-passing throughput)?
Here is a benchmark implemented in Akka 0.8.1 (Scala), Scala Actors, and Jetlang (Java).
Also see Azul Vega 1 + Scala actors, Azul's Fast Bytecodes for Funny Languages, and this paper.
When I ran a performance test with this simple actor built around my implementation of the model, it had a throughput of 444,773.906 messages per second. Clearly it is a contrived test, but it gives you a general idea of how it might perform in the wild.
private class TestActor : Actor<int, bool>
{
    protected override void ProcessMessage(AsyncReplyPackage<int, bool> package)
    {
        // Reply on the package's channel with whether the message exceeds the threshold.
        package.ReplyChannel.Send(package.Message > 2000000);
    }
}
static void Main(string[] args)
{
    var r = false;
    using (var ts = new TestActor())
    using (var rc = new AsyncChannel<bool>())
    {
        // Warm-up round trip before starting the clock.
        ts.PostWithAsyncReply(0, rc);
        r = rc.Receive();

        var count = 3000000;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            ts.PostWithAsyncReply(i, rc);   // send the request
            r = rc.Receive();               // block until the reply arrives
        }
        Console.WriteLine(sw.Elapsed);
    }
    Console.WriteLine(r);
    Console.ReadLine();
}
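Note that each iteration waits for the reply before posting the next message, so the loop measures sequential round trips (about 2.2 µs each at the rate quoted above) rather than pipelined throughput. Assuming the AsyncChannel can buffer multiple outstanding replies, which is an assumption on my part about this API, a variant along these lines would measure raw throughput instead:

// Hypothetical variant, reusing the same API as above: post every request
// before draining any replies, so the actor and the caller run concurrently.
// Assumes AsyncChannel<bool> queues replies rather than holding just one.
var count = 3000000;
var sw = Stopwatch.StartNew();
for (int i = 0; i < count; i++)
    ts.PostWithAsyncReply(i, rc);   // fire without waiting for the reply
for (int i = 0; i < count; i++)
    r = rc.Receive();               // drain all replies afterwards
Console.WriteLine(sw.Elapsed);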
Size
I broke out the profiler, and it looks like my implementation is 944 bytes per actor. :(
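If you want a rough number without a profiler, one way (a sketch of mine, not how the figure above was obtained) is to allocate a large batch of actors and divide the heap growth by the batch size:

// Rough, GC-based estimate of per-actor footprint: compare total managed
// heap size before and after allocating a large batch of actors.
// Only counts managed memory, so treat the result as an approximation.
const int n = 100000;
var actors = new TestActor[n];
long before = GC.GetTotalMemory(true);   // force a full collection first
for (int i = 0; i < n; i++)
    actors[i] = new TestActor();
long after = GC.GetTotalMemory(true);
Console.WriteLine("{0} bytes per actor", (after - before) / n);
GC.KeepAlive(actors);                    // keep the batch alive until measured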
I am intrigued by a question that an engineer at the company I work at asked me: is it better to have a single function that traverses an array and tests two conditions, or two functions with a single condition each?

I came here to ask whether my rationale is wrong or not.
The code was something like this:
response := ListObjectsFromS3(bucket)
var filteredEmptyObjectsArray = utils.FilterEmptyObjects(response)
var filteredNonJson = utils.FilterNonJson(filteredEmptyObjectsArray)
With each function being:
func FilterEmptyObjects(arrayToFilter []*Object) []*Object {
    var filteredArray []*Object
    for _, object := range arrayToFilter {
        if *object.Size > 0 {
            filteredArray = append(filteredArray, object)
        }
    }
    return filteredArray
}

func FilterNonJson(arrayToFilter []*Object) []*Object {
    var filteredArray []*Object
    for _, object := range arrayToFilter {
        if strings.HasSuffix(*object.Key, ".json") {
            filteredArray = append(filteredArray, object)
        }
    }
    return filteredArray
}
Please pardon the repetition in the code above. It is meant as a toy example.
I don't know exactly how Go optimizes this code, but I was thinking it might "squash" both functions into something like the following (of course not literally in Go code, but the generated machine code would be equivalent to this):
func FilterSquashed(arrayToFilter []*Object) []*Object {
    var filteredArray []*Object
    for _, object := range arrayToFilter {
        if strings.HasSuffix(*object.Key, ".json") && *object.Size > 0 {
            filteredArray = append(filteredArray, object)
        }
    }
    return filteredArray
}
And the code producing the response, again not literally in Go, would compile to machine code equivalent to something like this:
response := utils.FilterSquashed(ListObjectsFromS3(bucket))
The point is that when I inspected the objdump of both the optimized and the non-optimized builds, the two functions were still separate and each was invoked with a CALL instruction. So I'm trying to understand what depth of optimization is currently possible, or what the Go compiler has decided to stick with.
Let me know your thoughts.
The "squashed" code you show is not equivalent to the original code. The basic rule of code optimization is that the effects of the optimized and non-optimized code must be the same but in your example, you have two functions that apply different logic to filter a list, and a third function that would apply a third kind of logic that in this particular case would give you the composition of the two original functions, but not in the general case. So in short: no compiler would do what you are asking for in this case, because the semantics are different.
There may be cases where, once some functions are inlined, the compiler discovers further optimizations (you can inspect the Go compiler's inlining decisions with go build -gcflags=-m), but I don't see how your example would benefit from inlining.
JWT implementations might be exposed to different attacks; one of them is the alg:none attack (see more details here).

I'm using the spring-security-jwt dependency in my pom.xml file, and I was not able to find out whether this implementation deals with the alg:none attack.
Is this attack mitigated by the spring security JWT implementation?
If you are using spring-security-oauth/spring-security-jwt, then yes, this attack is mitigated. As per the link you shared, one way to mitigate the attack is to treat a JWT whose header says "alg":"none" as invalid, or to not rely on the alg header at all when selecting the algorithm.

In the spring-security-jwt source, the decode method of the JwtHelper class does not rely on the alg header when selecting the algorithm:
public static Jwt decode(String token) {
    int firstPeriod = token.indexOf('.');
    int lastPeriod = token.lastIndexOf('.');

    if (firstPeriod <= 0 || lastPeriod <= firstPeriod) {
        throw new IllegalArgumentException("JWT must have 3 tokens");
    }

    CharBuffer buffer = CharBuffer.wrap(token, 0, firstPeriod);
    // TODO: Use a Reader which supports CharBuffer
    JwtHeader header = JwtHeaderHelper.create(buffer.toString());

    buffer.limit(lastPeriod).position(firstPeriod + 1);
    byte[] claims = b64UrlDecode(buffer);
    boolean emptyCrypto = lastPeriod == token.length() - 1;

    byte[] crypto;
    if (emptyCrypto) {
        if (!"none".equals(header.parameters.alg)) {
            throw new IllegalArgumentException(
                    "Signed or encrypted token must have non-empty crypto segment");
        }
        crypto = new byte[0];
    }
    else {
        buffer.limit(token.length()).position(lastPeriod + 1);
        crypto = b64UrlDecode(buffer);
    }

    return new JwtImpl(header, claims, crypto);
}
There is no document or compilation of vulnerabilities for spring-security-jwt, but you can check the issues section of the spring-security-jwt repository and report any vulnerability you think needs to be patched.
I was thinking that, since bots get some generic questions like "how are you?", I have around 10 answers that I would like QnA Maker to choose from at random, rather than giving the same answer every time. The same goes for questions like "tell me a story".
To achieve this requirement, you can try this approach:
1) Add a QnA pair and use a special character (such as |) to separate the answers for the question "how are you?", e.g. store the answer as "I'm fine|Doing great|Never better"
2) Override the RespondFromQnAMakerResultAsync method, and in it split the response and pick one of the answers at random:
protected override async Task RespondFromQnAMakerResultAsync(IDialogContext context, IMessageActivity message, QnAMakerResults result)
{
    // This will only be called if Answers isn't empty
    var response = result.Answers.First().Answer;
    var answersforhowareyou = response.Split('|');
    if (answersforhowareyou.Count() > 1)
    {
        // Pick one of the split answers at random.
        Random rnd = new Random();
        int index = rnd.Next(answersforhowareyou.Count());
        response = answersforhowareyou[index];
    }
    await context.PostAsync(response);
}
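One caveat I would add (not part of the original answer): on .NET Framework, new Random() is seeded from the system clock, so constructing one per message can produce repeated picks for messages arriving in quick succession. Reusing a single instance avoids that:

// Reuse one Random across calls instead of creating a clock-seeded
// instance per message.
private static readonly Random Rnd = new Random();

Then use Rnd in place of rnd inside the method. If handlers can run concurrently, guard access with a lock, since Random is not thread-safe.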
I am making a web crawler. I have already tried async HTTP clients like the one from the Scala tutorial scaling-out-with-scala-and-akka, and Spray, but I couldn't manage to make them work. For now performance is not the most important part for me, but later on I would like to be able to improve the req/s ratio easily without changing libraries.

The library should be able to operate on HTTP headers and should not have performance issues with DNS resolution. Which library would be best for the task?
Spray should be sufficient for that. Even with this very simple code on a 16 Mbit/s connection, I can crawl around 8 pages per second, i.e. roughly 700,000 pages per day.

It fetches all the links on the main page of Wikipedia, loads all those pages, and then fetches all the links on those pages.

The problem is that Wikipedia's servers probably limit the traffic per client, so if I accessed several sites at once I should get much more speed.

It uses parallel collections to speed things up and to hide the latency of DNS resolution. But if you wrote this properly with actors and/or futures, using a library like Spray, I'm guessing it would be faster.
import io.Source

def time[T](f: => T): T = {
  val start = System.nanoTime
  val r = f
  val end = System.nanoTime
  val time = (end - start) / 1e6
  println("time = " + time + "ms")
  r
}

val domain = "https://en.wikipedia.org"
val startPage = "/wiki/Main_Page"
val linkRegex = """\"/wiki/[a-zA-Z\-_]+\"""".r

def getLinks(html: String): Set[String] =
  linkRegex.findAllMatchIn(html).map(_.toString.replace("\"", "")).toSet

def getHttp(url: String) = {
  val in = Source.fromURL(domain + url, "utf8")
  val response = in.getLines.mkString
  in.close()
  response
}

val links = getLinks(getHttp(startPage))
links.foreach(println)
println(links.size)

val allLinks = time(links.par.flatMap(link => getLinks(getHttp(link))))
println(allLinks.size)
I am running some experiments, timing them, and comparing the times to find the best "algorithm". The question that came up was whether running the tasks in parallel would skew the relative running times of the experiments, and whether I would get more representative results by running them sequentially. Here is a (simplified) version of the code:
public static void RunExperiment(IEnumerable<Action> experiments)
{
    Parallel.ForEach(experiments, experiment =>
    {
        var sw = Stopwatch.StartNew();
        experiment();
        sw.Stop();
        Console.WriteLine(@"Time was {0}", sw.ElapsedMilliseconds);
    });
}
My questions are about what is happening "behind the scenes":
When a task has started, is it possible that the OS or the framework suspends it during execution and resumes it later, making the measured running time of the experiment all wrong?
Would I get more representative results by running the experiments sequentially?
That depends on the machine you are running on and on what the experiments do, but generally the answer is yes, they may affect one another, mainly through resource starvation. Here's an example:
public class Piggy {
    public void GreedyExperiment() {
        // Raise this thread's priority so it wins most scheduling contests.
        Thread.CurrentThread.Priority = ThreadPriority.Highest;
        for (var i = 0; i < 1000000000; i++) {
            var j = Math.Sqrt(i / 5);
        }
    }
}
That is going to run a tight loop on a high-priority thread, which will basically consume one processor until it is done. If you only have one processor in the machine and the TPL decides to schedule two experiments on it, the other one is going to be starved of CPU time.
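To answer the second question concretely: yes, running the experiments one at a time removes that contention from the measurement. A minimal sketch of the sequential version (the warm-up pass is my addition, to keep JIT compilation out of the timed run):

public static void RunExperimentsSequentially(IEnumerable<Action> experiments)
{
    foreach (var experiment in experiments)
    {
        experiment();                    // warm-up: JIT and cache effects
        var sw = Stopwatch.StartNew();
        experiment();                    // timed run, no parallel neighbors
        sw.Stop();
        Console.WriteLine(@"Time was {0} ms", sw.ElapsedMilliseconds);
    }
}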