Calling antiviruses from software to scan in-memory images [closed] - windows

Our system periodically (several times a minute) calls an external service to download images. As security is a priority, we are performing various validations and checks on our proxy service, which interfaces with the external service.
One control we are looking into is anti-malware which is supposed to scan the incoming image and discard it if it contains malware. The problem is that our software does not persist the images (where they can be scanned the usual way) and instead holds them in an in-memory (RAM) cache for a period of time (due to the large volume of images).
Do modern antiviruses offer APIs that can be called by the software to scan a particular in-memory object? Does Windows offer a unified way to call this API across different antivirus vendors?
On a side note, does anybody have a notion of how this might affect performance?

You should contact antivirus vendors directly. Some of them do offer such APIs, but you will probably find it tricky even to get pricing information.
Windows has AMSI, which offers both a stream interface and a buffer interface. I am not sure whether it makes a copy of the data in the buffer or scans the buffer as-is.
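For reference, here is a minimal sketch of the AMSI buffer interface in C++ (AmsiInitialize / AmsiOpenSession / AmsiScanBuffer). The application name, the EICAR test string in main, and the choice to treat an initialization failure as "not detected" are my own assumptions; adapt them to your policy.

```cpp
// Minimal AMSI buffer-scan sketch. Requires Windows 10+ and a registered
// AMSI provider (Windows Defender registers one by default). Link amsi.lib.
#include <windows.h>
#include <amsi.h>
#include <cstdio>
#include <cstring>

#pragma comment(lib, "amsi.lib")
#pragma comment(lib, "ole32.lib")

// Returns true only if the registered provider flags the buffer as malware.
// Initialization failure is treated as "not detected" here -- a policy choice.
bool BufferLooksMalicious(const void* data, ULONG size, const wchar_t* contentName)
{
    HAMSICONTEXT ctx = nullptr;
    HAMSISESSION session = nullptr;
    AMSI_RESULT result = AMSI_RESULT_CLEAN;
    bool detected = false;

    if (FAILED(AmsiInitialize(L"ImageProxyScanner", &ctx)))
        return false;

    if (SUCCEEDED(AmsiOpenSession(ctx, &session)))
    {
        if (SUCCEEDED(AmsiScanBuffer(ctx, const_cast<void*>(data), size,
                                     contentName, session, &result)))
        {
            detected = (AmsiResultIsMalware(result) != 0);
        }
        AmsiCloseSession(ctx, session);
    }
    AmsiUninitialize(ctx);
    return detected;
}

int main()
{
    // The providers are COM objects, so initialize COM on this thread first.
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    // Classic EICAR test string: any AV provider should report it as malware.
    const char eicar[] =
        "X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*";
    bool bad = BufferLooksMalicious(eicar, (ULONG)std::strlen(eicar), L"eicar-test");
    std::printf("EICAR detected: %s\n", bad ? "yes" : "no");

    CoUninitialize();
    return 0;
}
```

Whether the buffer gets copied is up to whichever provider is registered, so the only way to know the real cost is to measure with your actual AV product; keeping one AMSI context alive and opening a session per download batch at least avoids paying the initialization cost for every image.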
And it will probably wreck your performance.
A faster alternative might be code that verifies the downloads really are images that can be decoded and re-encoded. Re-encoding .jpg files has obvious problems (it is lossy), so for those you might just sanity-check the header and the data. This could also turn out slower, since decoding large images is slow, but it would probably do a better job of catching zero-day exploits targeting libpng/libjpeg.
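To illustrate the header sanity check, here is a hypothetical first-pass filter that rejects buffers whose leading bytes do not match a small allow-list of image signatures; the PNG/JPEG/GIF list is my assumption, so extend it to whatever formats you actually accept. It is a cheap gate in front of the much more expensive full decode, not a replacement for it.

```cpp
// Cheap magic-byte check for in-memory downloads before any real decoding.
#include <cstdint>
#include <cstddef>
#include <cstring>

enum class ImageKind { Unknown, Png, Jpeg, Gif };

ImageKind SniffImage(const std::uint8_t* data, std::size_t size)
{
    static const std::uint8_t pngSig[]  = { 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A };
    static const std::uint8_t jpegSig[] = { 0xFF, 0xD8, 0xFF };       // SOI marker prefix
    static const std::uint8_t gifSig[]  = { 'G', 'I', 'F', '8' };     // GIF87a / GIF89a

    if (size >= sizeof(pngSig)  && std::memcmp(data, pngSig,  sizeof(pngSig))  == 0)
        return ImageKind::Png;
    if (size >= sizeof(jpegSig) && std::memcmp(data, jpegSig, sizeof(jpegSig)) == 0)
        return ImageKind::Jpeg;
    if (size >= sizeof(gifSig)  && std::memcmp(data, gifSig,  sizeof(gifSig))  == 0)
        return ImageKind::Gif;
    return ImageKind::Unknown;                 // reject or quarantine anything else
}
```

The heavier step described above, actually decoding the image (with a size cap), is what exercises the parsing code paths and therefore stands a chance against malformed data that a signature check will happily wave through.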
You can also find horror stories about scanning servers like that being targeted by malware hidden in otherwise benign files, although the last one I remember is from the last decade.

Related

Are there any computer viruses that affect gpus? [closed]

Recent developments in GPUs (the past few generations) allow them to be programmed. Languages like CUDA, OpenCL, and OpenACC are specific to this hardware. In addition, certain games allow programming shaders, which take part in rendering images in the graphics pipeline. Just as code intended for a CPU can cause unintended execution resulting in a vulnerability, I wonder whether a game or other code intended for a GPU can result in a vulnerability.
The benefit a hacker would get from targeting the GPU is "free" computing power without having to deal with the energy cost. The only practical scenario here is crypto-miner viruses; see this article for example. I don't know the details of how they operate, but the idea is to use the GPU to mine cryptocurrencies in the background, since GPUs are much more efficient than CPUs at this. These viruses will cause substantial energy consumption if they go unnoticed.
Regarding an application running on the GPU causing/using a vulnerability, the use-cases here are rather limited since security-relevant data usually is not processed on GPUs.
At most, you could deliberately crash the graphics driver and in that way sabotage other programs so they cannot run properly.
There are already plenty of security mechanisms prohibiting reading other processes' VRAM and the like, but there is always some way around them.

Website Performance Issue [closed]

If a website is experiencing performance issues all of a sudden, what can be the reasons behind it?
In my view the database could be one reason, or disk space on the server could be another of a few possible reasons; I would like to know more about this.
There can be any number of reasons, and which ones apply depends on your setup.
Based on what you have described, you can have a look at:
System counters of the web server/app server such as CPU, memory, paging, I/O, and disk (see the sketch after this list).
Any changes you made to the application: were those changes costly in terms of performance? Run a round of analysis on them to check whether any improvement is required.
If the system counters are choking, check which one is the bottleneck and try to resolve it.
Check all layers/tiers of the application, i.e. app server, database, directory service, etc.
If the database is the bottleneck, identify the costly queries and apply indexes and other DB tuning.
If the app server is choking, you need to identify and improve the methods that are resource-heavy.
Performance tuning is not a fast-track process; it takes time. Identify bottlenecks, try to resolve them, and repeat the process until you get the desired performance.
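If the web/app server happens to run on Windows (like the setup in the first question on this page), here is a minimal sketch for sampling a few of the system counters from the first point via the PDH API; the three counter paths are only examples, so swap in whichever counters matter for your stack.

```cpp
// Sample a few OS-level performance counters with the Windows PDH API.
// Link against pdh.lib. Counter paths below are examples only.
#include <windows.h>
#include <pdh.h>
#include <cstdio>

#pragma comment(lib, "pdh.lib")

int main()
{
    PDH_HQUERY query = nullptr;
    PDH_HCOUNTER cpu = nullptr, freeMem = nullptr, diskQueue = nullptr;

    if (PdhOpenQuery(nullptr, 0, &query) != ERROR_SUCCESS)
        return 1;

    PdhAddEnglishCounterW(query, L"\\Processor(_Total)\\% Processor Time", 0, &cpu);
    PdhAddEnglishCounterW(query, L"\\Memory\\Available MBytes", 0, &freeMem);
    PdhAddEnglishCounterW(query, L"\\PhysicalDisk(_Total)\\Current Disk Queue Length", 0, &diskQueue);

    // Rate counters such as "% Processor Time" need two samples.
    PdhCollectQueryData(query);
    Sleep(1000);
    PdhCollectQueryData(query);

    PDH_FMT_COUNTERVALUE value;
    if (PdhGetFormattedCounterValue(cpu, PDH_FMT_DOUBLE, nullptr, &value) == ERROR_SUCCESS)
        std::printf("CPU: %.1f%%\n", value.doubleValue);
    if (PdhGetFormattedCounterValue(freeMem, PDH_FMT_DOUBLE, nullptr, &value) == ERROR_SUCCESS)
        std::printf("Available memory: %.0f MB\n", value.doubleValue);
    if (PdhGetFormattedCounterValue(diskQueue, PDH_FMT_DOUBLE, nullptr, &value) == ERROR_SUCCESS)
        std::printf("Disk queue length: %.1f\n", value.doubleValue);

    PdhCloseQuery(query);
    return 0;
}
```

Performance Monitor (perfmon) or your platform's equivalent (top, vmstat, iostat on Linux) gives the same numbers without writing code; the point is simply to record them while the slowness is happening so you know which resource to chase.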

Suggestions for an Oracle data modeler that can reverse engineer and handle very large databases [closed]

By very large, I mean in the realm of thousands of tables. I've been able to use Toad Data Modeler to do the reverse engineering, but once it loads the txp file it creates, it just croaks. Even attempting to split up the model isn't possible, as TDM just sits there, frozen.
So, I was wondering what other options are out there, and perhaps if there are any 32 bit applications that can handle such a database model (considering the memory used by this is ~750MB, I would think it not too large for a 32 bit computer with max RAM).
Also to note, I am not trying to create a diagram with this (such a huge diagram would be effectively useless unless you already knew the system), but am instead needing to export the design of the database. So the data model tool doesn't need to support any sort of fanciful graphics, which may not be possible with the given size anyways.
Edit:
I've found a potential solution that gets TDM working. You have to close the project, close TDM, reopen TDM, and then reopen the project; if you just kill the process while it is frozen, this will not work. What this does is reset the zoom of the graphical view to the normal level, whereas normally, right after reverse engineering, the entire database is fitted into the view (and if you just kill the process, you will again see the entire database when you open the file). While I am not certain of the details, it appears that being zoomed in this way makes TDM run much more smoothly without freezing or crashing, and as a result I am able to keep working in it to do what I need.
How about Oracle's own SQL Developer Data Modeler?

What environment do I need for Testing Big Data Frameworks? [closed]

As part of my thesis I have to evaluate and test some big data frameworks like Hadoop or Storm. What minimal setup would you recommend to get relevant information about performance and scalability? Which cloud platforms would be best suited for this? Since I'm evaluating more than one framework, an out-of-the-box PaaS solution wouldn't be the best choice, right? What is the minimal number of nodes/servers needed to get relevant information? The cheaper the better, since the company I'm doing this for probably won't grant me a 20-machine cluster ;)
thanks a lot,
kroax
Well, you're definitely going to want at least two physical machines. Anything like putting multiple VMs on one physical machine is out of the question, as then you don't get the network overhead that's typical of distributed systems.
Three is probably the absolute minimum you could get away with as being a realistic scenario. And even then, a lot of the time, the overhead of Hadoop is just barely outweighed by the gains.
I would say five is the most realistic minimum, and a pretty typical small cluster size. 5 - 8 is a good, small range.
As far as platforms go, I would say Amazon EC2/EMR should always be a good first option to consider. It's a well-established, great service, and many real-world clusters are running on it. The upsides are that it's easy to use, relatively inexpensive, and representative of real-world scenarios. The only downside is that the virtualization could cause it to scale slightly differently than individual physical machines, but that may or may not be an issue for you. If you use larger instance types, I believe they are less virtualized.
Hope this helps.

refactor old webapp to gain speed [closed]

Four years ago I built a web app which is still used by some friends. The problem with that app is that it now has a huge database and loads very slowly. I know that is my own fault: MySQL queries are mixed in all over the place (even at layout generation time).
At the moment I know a bit about OO. I'd like to use this knowledge in my old app, but I don't know how to do it without rewriting everything from the beginning. Moving my app to MVC would be very difficult at this point.
If you were in my place, or if you had the task of improving the speed of my old app, how would you do it? Do you have any tips for me? Any working scenarios?
It all depends on context. The best option would be to rework the entire application, introducing best practices and standards all at once, but it is probably better to adopt an evolutionary approach:
1. Identify the major bottlenecks in the application using a profiling tool or a load test.
2. Estimate the effort required to refactor each item.
3. Identify the pages whose performance matters most to the end user.
4. Based on that information, create a task list and set the priority of each item.
Attack one problem at a time, making small increments. Try to spend 80% of your time solving the 20% of problems that are most critical.
Hard to give specific advice without a specific question, but here are some general optimization/organization techniques:
Profile to find hot spots in your code
You mention the MySQL queries being slow to load; try to optimize them.
Possibly move database access into stored procedures to help modularize your code.
Look for repeated code and try to move it into objects one piece at a time.
