Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
My company is interested in designing a protection system for small industrial machines, where factory employees insert wooden objects that are then cut by these machines.
The protection system needs to detect when a human arm is inserted into the input opening: the arm places a piece of wood and the machine then cuts it. However, the system must also be able to detect the presence of blood, in case the arm is somehow cut, and in that case shut off the machine's power.
I'm no expert on sensing technologies, and while we are looking to hire one, in the meantime I am asking for advice on a sensing technology that could fit these requirements.
Capacitive sensing, as I understand it, can detect not only the presence of an object but also its type (e.g. distinguish a human arm from blood). Can such technology be used for the purpose described above?
Thanks,
Arkadi
I wonder if something like this could help: http://www.sawstop.com/. I'm a woodworker as well and have been considering this device. I'm not sure whether it can distinguish an arm from blood, but it does seem able to sense "flesh" and shut down.
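For what it's worth, SawStop-style devices work by putting a small electrical signal on the blade and watching how it changes: conductive, water-rich flesh alters the sensed capacitance far more than dry wood does, and the brake fires when a threshold is crossed. A very rough sketch of that idea in Python (the sensor and relay functions, and the threshold value, are hypothetical placeholders, not a real API):

```python
import time

# Made-up calibration values; a real safety system would be tuned empirically
# and certified, with far faster response than a polling loop like this.
FLESH_THRESHOLD_PF = 50.0
POLL_INTERVAL_S = 0.001

def read_capacitance_pf() -> float:
    """Placeholder: return the capacitance sensed at the input opening, in pF."""
    raise NotImplementedError

def cut_machine_power() -> None:
    """Placeholder: open the safety relay / fire the brake."""
    raise NotImplementedError

def monitor() -> None:
    while True:
        # Flesh (conductive and water-rich) shifts the reading much more than dry wood.
        if read_capacitance_pf() > FLESH_THRESHOLD_PF:
            cut_machine_power()
            break
        time.sleep(POLL_INTERVAL_S)
```

Whether capacitance alone can reliably separate an arm from blood specifically is something a sensing specialist would have to answer.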
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Recent developments in GPUs (over the past few generations) allow them to be programmed. Languages such as CUDA, OpenCL, and OpenACC are specific to this hardware. In addition, certain games allow programmable shaders, which take part in rendering images in the graphics pipeline. Just as code intended for a CPU can cause unintended execution resulting in a vulnerability, I wonder whether a game or other code intended for a GPU can result in a vulnerability.
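To make the premise concrete, here is a minimal sketch of general-purpose code running on a GPU, using Numba's CUDA bindings as one assumed toolchain (CUDA C, OpenCL, or a compute shader would look similar):

```python
import numpy as np
from numba import cuda  # requires an NVIDIA GPU and the CUDA toolkit

@cuda.jit
def scale(data, factor):
    i = cuda.grid(1)        # global thread index
    if i < data.size:
        data[i] *= factor

arr = np.arange(1_000_000, dtype=np.float32)
d_arr = cuda.to_device(arr)                                 # copy to GPU memory
threads_per_block = 256
blocks = (arr.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](d_arr, np.float32(2.0))    # run on the GPU
result = d_arr.copy_to_host()
```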
The benefit a hacker would get from targeting the GPU is "free" computing power without having to pay the energy cost. The only practical scenario here is crypto-miner viruses; see this article for example. I don't know the details of how they operate, but the idea is to use the GPU to mine cryptocurrencies in the background, since GPUs are much more efficient than CPUs at this. These viruses cause substantial energy consumption if they go unnoticed.
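For intuition, the core of most mining workloads is brute-force hashing: try nonces until a digest meets a difficulty target. A toy, single-threaded sketch of that inner loop (real miners run billions of such independent attempts in parallel, which is exactly the kind of workload a GPU excels at):

```python
import hashlib

def mine(block_header: bytes, difficulty_prefix: str = "0000") -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 digest starts with the prefix."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "little")).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce
        nonce += 1

print(mine(b"example header"))   # each nonce attempt is independent, hence parallelizable
```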
Regarding an application running on the GPU causing or exploiting a vulnerability, the use cases here are rather limited, since security-relevant data is usually not processed on GPUs.
At most, you could deliberately crash the graphics driver and in this way sabotage other programs from executing properly.
There are already plenty of security mechanisms prohibiting reading other processes' VRAM and so on, but there is always some way around them.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I'm developing a preventive maintenance strategy for an industrial plant based on the RCM (Reliability Centered Maintenance) methodology. For this job, I need to choose one CMMS (Computerized Maintenance Management System) from among several options, and I need to do it in a systematic way.
Is there a technical procedure for making a comparative assessment of software programs and determining which CMMS option is best? Any standard, table, or matrix?
Thank you so much
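EDIT: to clarify, the kind of comparison matrix I have in mind is a weighted scoring table, something along these lines (the criteria, weights, and scores below are made-up placeholders, not a recommendation):

```python
# Toy weighted scoring matrix for comparing CMMS candidates.
# Weights sum to 1.0; scores are 1-5. All values are illustrative only.
criteria_weights = {"RCM support": 0.30, "ease of use": 0.20,
                    "reporting": 0.20, "integration": 0.15, "cost": 0.15}

candidates = {
    "CMMS A": {"RCM support": 4, "ease of use": 3, "reporting": 5, "integration": 2, "cost": 3},
    "CMMS B": {"RCM support": 5, "ease of use": 4, "reporting": 3, "integration": 4, "cost": 2},
}

for name, scores in candidates.items():
    total = sum(w * scores[c] for c, w in criteria_weights.items())
    print(f"{name}: weighted score {total:.2f}")
```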
I found an interesting document, very useful for me, developed by the IRIS Center at the University of Maryland:
Comparative Assessment of Software Programs for the Development of Computer-Assisted Personal Interview (CAPI) Applications
This scientific article could be useful as well:
Comparative Assessment of Software Quality Classification Techniques: An Empirical Case Study
EDIT
Sites like Quora are better places to ask this kind of question.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I'm interested in segmentation, feature detection, image processing algorithms, etc. I've done a few searches on the internet for conferences or seminars that would be interesting and, more importantly, helpful for connecting with other people in my field. Any suggestions on the best US conferences that deal with image processing?
There are a couple of good conferences, most sponsored by the IEEE, such as CVPR (Conference on Computer Vision and Pattern Recognition) and ICIP (International Conference on Image Processing). Both CVPR and ICIP usually have only a small number of exhibitors, so if you want to listen to speakers and not get lost in a sea of exhibitors, these are for you. ASPRS has a conference on Imaging and Geospatial Technologies. There is also the SIAM Conference on Imaging Science next year in May.
I've used this website several times. It lists basically every conference and event related to image processing and computer vision, both in the US and internationally. The author keeps everything up to date and nicely organized.
My company, Wolfram Research, is holding a few image processing events on a much smaller scale.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Like many of you, I like to play a game every once in a while. Since I use Linux for my daily tasks (programming, writing papers, browsing, etc.), I only really make use of my graphics card's capabilities in Windows, while gaming.
Lately I noticed my energy bills have gotten really high, and I would like to reduce my computer's energy consumption. My graphics card draws 110 W at idle, whereas a low-end Radeon HD5xxx draws only 5 W. My computer is powered on about 40 hours a week, of which only 3 hours are gaming. This means I waste roughly 202 kWh a year (!).
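That figure follows from a straightforward estimate (assuming the high-end card sits idle whenever I'm not gaming):

```python
# Rough estimate of the energy wasted by an idling high-end card.
idle_hours_per_week = 40 - 3        # powered on, but not gaming
extra_idle_power_w = 110 - 5        # high-end idle draw minus low-end idle draw
wasted_kwh_per_year = idle_hours_per_week * extra_idle_power_w * 52 / 1000
print(wasted_kwh_per_year)          # ~202 kWh
```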
I figured I could just buy a DVI splitter and a low-end Radeon card, and disable the PCI slot of the high-end card in Linux. I googled a bit, but I'm not sure which search terms to use, so I haven't found anything useful.
Too long, didn't read: is it possible to cut off the power to a PCI slot using Linux?
No.
What you're asking isn't even a "Linux" question but a motherboard question: is it electrically possible to do this?
The answer is still no.
The only chance you would have would be to get the spec of the chip/card that is in the slot and see if there is a bit you can set on it that would "disable" it or put it into some "low power mode".
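For what that "low power mode" route can look like in practice, recent Linux kernels expose PCI runtime power management through sysfs; whether the card actually powers down then depends entirely on the driver, and it still does not cut power to the slot itself. A minimal sketch (the bus address is an example; look yours up with lspci -D and run as root):

```python
from pathlib import Path

# Permit runtime suspend for one PCI device. This only *allows* the kernel
# to suspend it; the GPU driver must actually support runtime PM for
# anything to happen, and the slot itself keeps receiving power.
device = Path("/sys/bus/pci/devices/0000:02:00.0")   # example address
(device / "power" / "control").write_text("auto\n")
```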
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
Someone within my organization has started pushing for us to pilot the CMU SEI's TSP process (see website here). I have an instinctual aversion to any attempts to cure software development illnesses with alphabet soup, but I would like to know if anyone has experience with this process and can provide tangible facts.
I used to be a fan of the SEI's CMM. I even read Watts Humphrey's "Managing the Software Process" cover to cover. I haven't used TSP, but I suspect it has strengths and weaknesses similar to the other software processes.
Definitely read about what they claim it can do and how to implement it, but be vigilant about keeping your software process small and flexible. You need a process, but be careful about taking one wholesale from someone else.
Good luck.
We've been using this process for a few months now and I'm not particularly impressed. This process is only suitable for a strict command and control style of management where programmers are essentially bean counters. Most of the good parts of this process (size estimates rather than time estimates, self reviews, detailed plans, logging time against plans, and keeping a log of defects and errors for later review) can be implemented without throwing a bunch of money at SEI.