Cytoscape vs STRING for long list of proteins - bioinformatics

I am mid-way through my university project, and I have run into an issue. I have a long list of around 1000 proteins that I wanted to analyse in STRING; however, my list is too large. I decided to try Cytoscape (and downloaded the stringApp), but the networks generated are still very messy. I've attached a screenshot here. Is there any way to improve the presentation of the network by downloading any Cytoscape apps or by tweaking the settings?
Thanks in advance

Well, the short answer is "no". A slightly longer answer is "it depends".
Showing a hairball really isn't helpful, usually, so you need to refine things somewhat. What is your data source (i.e. where did the 1000 proteins come from)? What do you hope to see in the network? If you are looking for particular groups of proteins (e.g. complexes), you would probably want to use MCL to cluster them first. If you have some other data you want to map, such as transcriptomic or proteomic data, you could refine your network based on fold change or abundance values.
All that being said, here are some things you might try. First, you are seeing the "fast" version of the network. Try clicking the Show Graphics Details button (the diamond in the network view toolbar); that will give you the full graphics details. Second, you might try spreading the network out a bit using Layout->Layout Tools: turn off "Selected Only" and then adjust the scale. Finally, depending on your biological question, you might want to eliminate proteins that are only present in the nucleus or cytoplasm, or only in lung tissue. This is all possible using the sliders provided in the stringApp's Results Panel.
-- scooter
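As an aside (not part of the answer above): one way to thin things out before Cytoscape ever sees the list is to query the STRING REST API yourself with a high confidence cutoff, so only strong interactions make it into the network you load. A rough Python sketch; the endpoint and parameters follow STRING's public API, but the protein list and score threshold are just placeholders, and the returned column names may differ by API version:

import requests

proteins = ["TP53", "MDM2", "CDKN1A"]        # stand-in for your ~1000 identifiers
resp = requests.post(
    "https://string-db.org/api/tsv/network",
    data={
        "identifiers": "\r".join(proteins),  # one identifier per line
        "species": 9606,                     # human; change to your organism's NCBI taxon ID
        "required_score": 700,               # keep only interactions with confidence >= 0.7
    },
)
resp.raise_for_status()

lines = resp.text.strip().splitlines()
print(lines[0].split("\t"))                  # inspect the columns, then parse the edges
print(f"{len(lines) - 1} interactions at confidence >= 0.7")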

Related

Detect sexual content in image and text

I have started a social networking app and there is one user who won't stop uploading images of women who, well, are engaged in sexual activities. He also adds offensive captions to them.
My question: how can I detect adult content in images and text and block it from my app? I think this is a problem faced by most people building any kind of open networking app. It would be great if the solution were as fast and inexpensive as possible.
Implement a system which essentially stores {SHA-256 image hash, human rating, computer rating} in a database.
Create an interface for the human and computer ratings which can judge and categorize images, as well as an interface in your software which can use that information to decide how to handle such images.
Choose a tool, likely a convolutional-neural-network-based algorithm, with an easy-to-use API. Here's a random result from searching: https://imagga.com/solutions/adult-content-moderation.html
Put everything together and you should have a system which can automatically guess how to handle images, but which also allows you to iterate through them manually, which both corrects the database and provides the training data for the rating algorithm.
Note: The software never treats an image's status as permanent unless a human has rated it. Whenever an image is accessed, the latest state of the image detection decides how it is handled. If this happens far too frequently to support, associate a time buffer with the image so that it isn't re-rated too often.
Update: The advantage of this custom solution is that you can control how things work. You can define the rating system and how situations are handled, and you keep governance over whatever set of trained algorithms you are using. You always have the final say and you can see what is going on at all times. The catch is that you would need to implement this software as an extension to your project.
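A minimal sketch of the {SHA-256 hash, human rating, computer rating} store described above, using SQLite. The rating scale, the 0.5 block threshold, and the classify() call are placeholders; plug in whatever moderation model or API you end up choosing:

import hashlib
import sqlite3

db = sqlite3.connect("moderation.db")
db.execute("""CREATE TABLE IF NOT EXISTS image_ratings (
    sha256 TEXT PRIMARY KEY,
    human_rating REAL,       -- NULL until a moderator has looked at it
    computer_rating REAL     -- e.g. probability of adult content, 0..1
)""")

def classify(image_bytes: bytes) -> float:
    return 0.0               # placeholder for your chosen adult-content model/API

def handle_upload(image_bytes: bytes) -> bool:
    """Return True if the image may be shown, False if it should be blocked."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    row = db.execute("SELECT human_rating, computer_rating FROM image_ratings "
                     "WHERE sha256 = ?", (digest,)).fetchone()
    if row and row[0] is not None:        # a human verdict is final
        return row[0] < 0.5
    score = classify(image_bytes)         # otherwise re-run the latest model
    db.execute("INSERT OR REPLACE INTO image_ratings VALUES (?, ?, ?)",
               (digest, row[0] if row else None, score))
    db.commit()
    return score < 0.5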
Not easily; it would require machine-learning techniques and a ton of training. Not to mention that all modern techniques can easily be tricked.
There are a few moderation solutions, but they aren't ideal.
First, you could ban them. Not ideal, as they could make another account, but it means they have to create another email address for it.
Second, you could isolate them (a shadowban). I forget exactly how it works, but the idea is that they still think they are posting on your app, but none of their content gets propagated to other people.
I don't know the legality of either of these; it's all up to your terms and conditions. But AI is not really a good option, especially if your app needs to scale.

OCR for scanning printed receipts. [duplicate]

Would OCR Software be able to reliably translate an image such as the following into a list of values?
UPDATE:
In more detail the task is as follows:
We have a client application, where the user can open a report. This report contains a table of values.
But not every report looks the same - different fonts, different spacing, different colors; maybe the report contains many tables with different numbers of rows/columns...
The user selects an area of the report which contains a table, using the mouse.
Now we want to convert the selected table into values - using our OCR tool.
At the time the user selects the rectangular area I can ask for extra information to help with the OCR process, and ask for confirmation that the values have been correctly recognised.
It will initially be an experimental project, and will therefore most likely use an open-source OCR tool - or at least one that does not cost any money for experimental purposes.
The simple answer is YES, you just have to choose the right tools.
I don't know if open source can ever get close to 100% accuracy on those images, but based on the answers here probably yes, if you spend some time on training and solve the table-analysis problem and so on.
When we talk about commercial OCR like ABBYY or others, it will give you 99%+ accuracy out of the box and it will detect tables automatically. No training, nothing, it just works. The drawback is that you have to pay $$ for it. Some would object that with open source you pay with your time to set it up and maintain it - but everyone decides that for themselves.
However, if we talk about commercial tools, there is actually more choice, and it depends on what you want. Boxed products like FineReader actually target converting input documents into editable documents like Word or Excel. Since you actually want to get data, not a Word document, you may need to look at a different product category - Data Capture, which is essentially OCR plus some additional logic to find the necessary data on the page. In the case of an invoice that could be the company name, total amount, due date, line items in the table, etc.
Data Capture is a complicated subject and requires some learning, but used properly it can give guaranteed accuracy when capturing data from documents. It uses different rules for data cross-checks, database lookups, etc. When necessary it may send data for manual verification. Enterprises widely use Data Capture applications to enter millions of documents every month and rely heavily on the extracted data in their everyday workflow.
And there are also OCR SDKs, of course, which give you API access to the recognition results so that you can program what to do with the data.
If you describe your task in more detail I can advise which direction is easier to go.
UPDATE
So what you are doing is basically a Data Capture application, but not fully automated, using the so-called "click to index" approach. There are a number of applications like that on the market: you scan images, an operator clicks on the text in the image (or draws a rectangle around it), and then populates fields into a database. It is a good approach when the number of images to process is relatively small and the manual workload is not big enough to justify the cost of a fully automated application (yes, there are fully automated systems that can handle images with different fonts, spacing, layouts, numbers of rows in the tables, and so on).
If you have decided to develop instead of buy, then all you need here is to choose an OCR SDK. You are going to write all the UI yourself, right? The big choice is: open source or commercial.
The best open-source option is Tesseract OCR, as far as I know. It is free, but it may have real problems with table analysis; with a manual-zoning approach, though, that should not be a problem. As for OCR accuracy - people often train the OCR on a font to increase accuracy, but that should not apply to you, since the fonts can differ. So just try Tesseract out and see what accuracy you get - that will determine the amount of manual work needed to correct it.
Commercial OCR will give higher accuracy but will cost you money. I think you should take a look anyway, to see whether it is worth it or whether Tesseract is good enough for you. The simplest way would be to download a trial version of some boxed OCR product like FineReader. You will then get a good idea of what the accuracy of an OCR SDK would be.
If you always have solid borders in your table, you can try this solution (a rough code sketch follows the steps):
1. Locate the horizontal and vertical lines on each page (long runs of black pixels)
2. Segment the image into cells using the line coordinates
3. Clean up each cell (remove borders, threshold to black and white)
4. Perform OCR on each cell
5. Assemble results into a 2D array
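Here is a rough sketch of those steps using OpenCV and pytesseract. The file name, kernel lengths, and the 5-pixel merge gap are assumptions you would tune for your own scans:

import cv2
import numpy as np
import pytesseract

img = cv2.imread("table.png")                        # hypothetical input scan
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Step 1: keep only long horizontal / vertical runs of dark pixels (the borders).
h_lines = cv2.morphologyEx(bw, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1)))
v_lines = cv2.morphologyEx(bw, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_RECT, (1, 40)))

def line_positions(mask, axis, gap=5):
    """Collapse a line mask into a sorted list of border coordinates."""
    coords = np.where(mask.sum(axis=axis) > 0)[0]
    merged, current = [], [coords[0]]
    for c in coords[1:]:
        if c - current[-1] <= gap:
            current.append(c)
        else:
            merged.append(int(np.mean(current)))
            current = [c]
    merged.append(int(np.mean(current)))
    return merged

rows = line_positions(h_lines, axis=1)               # y coordinates of horizontal borders
cols = line_positions(v_lines, axis=0)               # x coordinates of vertical borders

# Steps 2-5: slice out each cell, trim the borders, OCR it, collect into a 2D array.
table = []
for top, bottom in zip(rows, rows[1:]):
    row_text = []
    for left, right in zip(cols, cols[1:]):
        cell = gray[top + 3:bottom - 3, left + 3:right - 3]
        row_text.append(pytesseract.image_to_string(cell, config="--psm 7").strip())
    table.append(row_text)
print(table)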
If instead your document has a borderless table, you can try to follow this line:
Optical Character Recognition is pretty amazing stuff, but it isn't always perfect. To get the best possible results, it helps to use the cleanest input you can. In my initial experiments, I found that performing OCR on the entire document actually worked pretty well as long as I removed the cell borders (long horizontal and vertical lines). However, the software compressed all whitespace into a single empty space. Since my input documents had multiple columns with several words in each column, the cell boundaries were getting lost. Retaining the relationship between cells was very important, so one possible solution was to draw a unique character, like "^", on each cell boundary - something the OCR would still recognize and that I could use later to split the resulting strings.
I found all this information at this link, by asking Google for "OCR to table". The author published a full algorithm using Python and Tesseract, both open-source solutions!
If you want to try out Tesseract's power, maybe you should try this site:
http://www.free-ocr.com/
Which OCR are you talking about?
Will you be developing code based on that OCR, or will you be using something off the shelf?
FYI:
Tesseract OCR
It has a document-reading executable implemented, so you can feed the whole page in and it will extract the characters for you. It recognizes blank spaces pretty well, so it might be able to help with tab spacing.
I've been OCR'ing scanned documents since '98. This is a recurring problem for scanned docs, especially those with rotated and/or skewed pages.
Yes, there are several good commercial systems that can, once well configured, provide a terrific automatic data-mining rate, asking for the operator's help only on very degraded fields. If I were you, I'd rely on one of them.
If commercial choices threaten your budget, OSS can lend a hand. But "there's no free lunch", so you'll have to rely on a bunch of tailor-made scripts to scaffold an affordable solution to process your pile of docs. Fortunately, you are not alone. In fact, over the past decades many people have been dealing with this. So, IMHO, the best and most concise answer to this question is provided by this article:
https://datascience.blog.wzb.eu/2017/02/16/data-mining-ocr-pdfs-using-pdftabextract-to-liberate-tabular-data-from-scanned-documents/
It is worth reading! The author offers useful tools of his own, but the article's conclusion is very important for giving you the right mindset about how to solve this kind of problem.
"There is no silver bullet."
(Fred Brooks, The Mythical Man-Month)
It really depends on implementation.
There are a few parameters that affect the OCR's ability to recognize:
1. How well the OCR is trained - the size and quality of the examples database
2. How well it is trained to detect "garbage" (besides knowing what's a letter, you need to know what is NOT a letter).
3. The OCR's design and type
4. If it's a Neural Network, the Neural Network's structure affects its ability to learn and "decide".
So, if you're not making one of your own, it's just a matter of testing different kinds until you find one that fits.
You could try another approach. With Tesseract (or other OCRs) you can get coordinates for each word. Then you can try to group those words by vertical and horizontal coordinates to get rows/columns - for example, to tell the difference between a single white space and a tab-sized gap. It takes some practice to get good results, but it is possible. With this method you can detect tables even if they use invisible separators - no lines. The word coordinates are a solid basis for table recognition.
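A rough sketch of that word-coordinate approach, using pytesseract's per-word output. The file name and the 15-pixel row tolerance are assumptions to tune for your scans:

import pytesseract
from pytesseract import Output
from PIL import Image

data = pytesseract.image_to_data(Image.open("report.png"), output_type=Output.DICT)

# Group words into rows by their top coordinate, then sort each row left to right.
rows = {}
for i, word in enumerate(data["text"]):
    if not word.strip():
        continue
    row_key = data["top"][i] // 15           # ~15 px tolerance per text line (assumption)
    rows.setdefault(row_key, []).append((data["left"][i], word))

for _, words in sorted(rows.items()):
    line = [w for _, w in sorted(words)]
    print("\t".join(line))                   # a big jump in "left" would mark a new column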
We have also struggled with the issue of recognizing text within tables. There are two solutions which do it out of the box, ABBYY Recognition Server and ABBYY FlexiCapture. Recognition Server is a server-based, high-volume OCR tool designed for converting large volumes of documents to a searchable format. Although it is available with an API for those types of uses, we recommend FlexiCapture. FlexiCapture gives low-level control over extraction of data from table formats, including automatic detection of table items on a page. It is available in a full API version without a front end, or as the off-the-shelf version that we market. Reach out to me if you want to know more.
Here are the basic steps that have worked for me. Tools needed include Tesseract, Python, OpenCV, and ImageMagick if you need to do any rotation of images to correct skew.
Use Tesseract to detect rotation and ImageMagick mogrify to fix it.
Use OpenCV to find and extract tables.
Use OpenCV to find and extract each cell from the table.
Use OpenCV to crop and clean up each cell so that there is no noise that will confuse OCR software.
Use Tesseract to OCR each cell.
Combine the extracted text of each cell into the format you need.
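For step 1 on its own, a minimal sketch (assuming pytesseract and ImageMagick's mogrify are on the PATH; the file name is a placeholder):

import re
import subprocess
import pytesseract
from PIL import Image

path = "page.png"                                    # hypothetical input file
osd = pytesseract.image_to_osd(Image.open(path))     # orientation & script detection
rotate = int(re.search(r"Rotate: (\d+)", osd).group(1))
if rotate:
    # mogrify rewrites the file in place with the corrective rotation applied
    subprocess.run(["mogrify", "-rotate", str(rotate), path], check=True)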
The code for each of these steps is extensive, but if you want to use a python package, it's as simple as the following.
pip3 install table_ocr
python3 -m table_ocr.demo https://raw.githubusercontent.com/eihli/image-table-ocr/master/resources/test_data/simple.png
That package and demo module will turn the following table into CSV output.
Cell,Format,Formula
B4,Percentage,None
C4,General,None
D4,Accounting,None
E4,Currency,"=PMT(B4/12,C4,D4)"
F4,Currency,=E4*C4
If you need to make any changes to get the code to work for table borders with different widths, there are extensive notes at https://eihli.github.io/image-table-ocr/pdf_table_extraction_and_ocr.html

How the computer knows "Recommended for You"?

Recently, I found that several web sites have something like "Recommended for You" - for example YouTube or Facebook. The web site can study my usage behaviour and recommend some content for me... I would like to know how they analyse this information. Is there any algorithm to do so? Thank you.
Amazon and Netflix (among others) use a technique called Collaborative filtering to suggest things you might like based on the likes/dislikes of others who have made purchases and selections similar to yours.
Is there any Algorithm to do so?
Yes
Yes. One fairly common one is to look at things you've selected in the past, find other people who've made those selections, then find the other selections most common among those other people, and guess that you're likely to be interested in those as well.
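To make that concrete, here is a toy sketch of user-based collaborative filtering; the users and items are made up for illustration:

from collections import Counter

history = {
    "you":   {"item_a", "item_b"},
    "alice": {"item_a", "item_b", "item_c"},
    "bob":   {"item_b", "item_d"},
    "carol": {"item_a", "item_c"},
}

def recommend(user, history, top_n=3):
    mine = history[user]
    scores = Counter()
    for other, items in history.items():
        if other == user:
            continue
        overlap = len(mine & items)           # how similar their history is to yours
        for item in items - mine:             # things they chose that you haven't
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("you", history))              # item_c ranks highest here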
Yup, there are lots of algorithms. Things such as k-nearest neighbors: http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm.
Here is a pretty good book on the subject that covers making these sorts of systems along with others: http://www.amazon.com/gp/product/0596529325?ie=UTF8&tag=ianburriscom-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=0596529325.
It's generally done by matching you with other users who have a similar usage history / profile and then recommending other things that they've purchased/watched/whatever.
Searching for "recommendation algorithm" yields lots of papers. Most algorithms incorporate "machine learning" algorithms to determine groups of things (comedy movies, books on gardening, orchestral music, etc.). Your matching with those groups yields recommendations. Some companies use humans to classify things, too.
Such an algorithm is going to vary wildly from company to company. In many cases, it analyzes some combination of your search history, purchase history, physical location, and other factors. It probably will also compare purchases/searches amongst other people to find what those people have purchased/searched for, and recommend some of those products to you.
There are probably hundreds of these algorithms out there, but I doubt you can use any of them (that are actually good). Probably you are better off figuring it out yourself.
If you can categorize your contents (i.e. by tagging or content analysis), you can also categorize your users and their preferences.
For example: you have a video portal with 5 million videos, 1 million of which are tagged mostly red. If 80% of all videos watched by a user (who is identified by an IP, a persistent user account, ...) are tagged mostly red, you might want to recommend even more red videos to him. You might want to refine your recommendations by looking at his further actions: does he like your recommendations - if so, why not give him even more; if not, try the second-best guess - maybe he's not looking for the colour, but for the background music...
There's no absolute algorithm for it, but all implementations go in a similar direction. It's always based on observing users, which scares me from time to time :-)
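A tiny sketch of that tag-profile idea (content-based filtering): build a user's tag profile from their watch history, then rank unseen videos by it. The tags and catalog here are made up:

from collections import Counter

watched = [{"red", "cars"}, {"red"}, {"music", "red"}]        # tags of videos the user watched
catalog = {"v1": {"red", "diy"}, "v2": {"music"}, "v3": {"cats"}}

profile = Counter(tag for tags in watched for tag in tags)     # "red" dominates this profile

def score(tags):
    return sum(profile[t] for t in tags)

print(max(catalog, key=lambda vid: score(catalog[vid])))       # recommends "v1" (a red video)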
There's a whole lot of algorithms tackling the issue: see the Wiki article. It's a machine-learning problem. Computers can learn using two main techniques: classification and clustering. They require some dataset as input. If the dataset is informative (it really holds some useful patterns), those ML techniques can dig most of them out.
Clustering may be the best fit for this kind of problem. Its main use is to find similarities among points in a provided dataset. If the points are, e.g., your search history, they can be grouped together to form certain clusters. If your search history closely relates to another user's, a hint can be given - picking links that are most similar to yours.
The same goes for book recommendations - it's obvious what dataset they use: "Other people who bought this product also bought Product A, Product B, ...". The key here is to match your profile to others' and use the most similar ones to recommend.
The computer retrieves information from the human brain with complex memory scan process, sorts it accordingly and outputs results based on what you have experienced in your life so far.

Is there an algorithm for positioning nodes on a link chart?

I'm a member of a small but fairly sociable online forum, and just for fun we've been plotting a chart of who's met who in real life. Here's what it looked like fairly recently.
(The colour is the "distance" from the currently-selected user, e.g., yellow is someone who's met someone who's met them. And no, I'm not Zak.) Apologies for the faded lines, they don't seem to have weathered the SO upload process very well.
It's generated as SVG, with a big block of JSON defining who's met who. The position (x,y) of each member on the chart is hard-coded into that JSON. Until now, it's been fairly easy to cope when someone meets someone else - at worst, maybe two or three people need to be shuffled around - but it does involve editing the co-ordinates manually. And now that the European and North American contingents are meeting up, and a few on the periphery are showing up at meets, all hell is breaking loose...
We can put some effort into making all the nodes draggable, which would make the job of re-arranging a bit less tiresome. But it seems more sensible to let the computer take care of positioning them, especially as the problem will only get harder with more members.
So, does anyone know of an algorithm for positioning these nodes on the chart, based on which other nodes they're linked with?
Ideally, it would
minimise or avoid long links
avoid having lines run underneath unrelated nodes
take account of the fact that well-connected nodes are bigger
do its best to show the wider "all these guys met each other" relationships (the big circle at the bottom is largely the result of one meet, for example, though the chart has no idea of when any two people met)
but if it gets us close enough to tweak it, that's progress.
And, what's the real name for these charts? I believe they're called "link charts", but I'm not getting good results from Google using that name or anything else I can think of.
We'll likely be implementing this in PHP or Javascript, but right now it's how to begin approaching the problem that's the bigger question.
Edit: Some great answers coming already. I would be very interested in the actual algorithm(s) used, though, as well as tools that do the job.
What you are looking for are, e.g., force-based (force-directed) layout algorithms. There are quite a few libraries, and some have been named already, like Prefuse and yWorks. Here are a few more: JUNG, GVF, JGraph.
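As a quick illustration of what a force-directed layout gives you (Python/networkx here, just to show the idea; the coordinates could be exported to your SVG/JSON, and the member names are made up):

import networkx as nx

G = nx.Graph()
G.add_edges_from([("Zak", "Ann"), ("Ann", "Bob"), ("Bob", "Cat"), ("Zak", "Cat")])

# Fruchterman-Reingold force-directed layout: linked nodes attract, all nodes repel.
pos = nx.spring_layout(G, seed=42)
for member, (x, y) in pos.items():
    print(f"{member}: ({x:.2f}, {y:.2f})")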
The real name for these is a "graph". To generate the graph and get a good layout algorithm, the best option is to use software that will do the job for you.
I advise you to use Gephi.
This software is able to do all the things you want.
Have a look at the yWorks tools.
You can google for "graph visualization". There are more libraries for this, including GraphViz, but probably not all of your requirements will be met.
If you can deal w/ Java, take a look at prefuse.
Have a look at NodeXL
Also, this book may be relevant.

Techniques for visualising change over time in graphs

I'm looking to display a graph (network diagram, not a chart) and show its changes over time. Is there a standard or best way to do this, or any kind of 'network diff' tool?
I'm looking for an overview of the general layout decisions involved, i.e. a list of options and trade-offs to be made, and best-practice guidelines where these exist.
Wow. Not an easy question! I'm curious if anyone can come up with some authoritative resources for you.
I haven't found any standard or best practice documented anywhere from a design standpoint, nor do I know of any tool specifically designed for determining and displaying the changes, but I have some ideas.
First, a few technical notes. There's GraphML, which you can use (and extend) to represent your graph in a standard format, and there are some parsers available, and it works with Prefuse and probably other display libraries. It's just XML, though - nothing too special. Creating the "diff" by comparing two GraphML files should be pretty simple.
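For instance, a minimal sketch of that GraphML "diff" using networkx; the file names are hypothetical snapshots of the same graph:

import networkx as nx

before = nx.read_graphml("before.graphml")
after = nx.read_graphml("after.graphml")

added_nodes   = set(after.nodes) - set(before.nodes)
removed_nodes = set(before.nodes) - set(after.nodes)

def edge_set(g):
    return {tuple(sorted(e)) for e in g.edges()}     # normalize undirected edge order

added_edges   = edge_set(after) - edge_set(before)
removed_edges = edge_set(before) - edge_set(after)

print("added nodes:", added_nodes, "removed nodes:", removed_nodes)
print("added edges:", added_edges, "removed edges:", removed_edges)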
The really interesting part is how to communicate the differences to the user.
In all cases, you should have a visual indicator for nodes and edges that are added or removed. You may use color, showing existing nodes as something neutral, say gray, new nodes as green, and removed nodes as red. There are lots of options.
You might find this slideshow interesting.
It's probably obvious, but, over time, the nodes should not move more than necessary to adapt to the new state of the graph - the layout should evolve, not start from scratch for every state. This is crucial for comparing the states.
Side-by-side before/after comparison. Present before and after snapshots of the same graph side-by-side. If your graph is very large and complicated, a side-by-side layout may be impractical. You could try overlaying one graph over the other, though that is likely to be disorienting.
Side-by-side series comparison. AKA small multiples. Same as above, but showing as many points in time as is useful. Even more restrictive than before/after in terms of how much space is required.
Animate a single graph. I think the most intuitive method is to smoothly animate the graph changes, though a choppy slideshow could work if the changes between slides are not too drastic.
Showing details. If useful, you can spell out the change event details in a few different ways.
Show labels on the graph node (could be interactive if there are too many to show at once)
Show a list in a sidebar / legend. Nice if reading the progression of changes is useful, but harder to connect to the visual.
Show a timeline instead of a list. This shows the 'real' progression of events better than a simple list, which gives the impression that all the events are evenly spaced over time.
What you actually choose to do would depend largely on the nature of your dataset and your goals. A simple graph of a few dozen nodes and a few changes is a much different challenge than a huge network, like say every constellation in the night sky!
Here is an interesting study: http://publik.tuwien.ac.at/files/PubDat_198995.pdf
This paper presents a prototype, and user tests will be published soon in:
P. Federico, W. Aigner, S. Miksch, F. Windhager, M. Smuc: "Vertigo zoom: combining relational and temporal perspectives on dynamic networks"; accepted as a talk at the 11th International Working Conference on Advanced Visual Interfaces (AVI 2012), Capri Island, 2012-05-21 to 2012-05-25; in: Proceedings of the 11th International Working Conference on Advanced Visual Interfaces (AVI 2012), ACM, 2012, ISBN 978-1-4503-1287-5.
http://ieg.ifs.tuwien.ac.at/~federico/pub.php
Your question is kind of general; I'm not clear exactly what kinds of analysis you are aiming for. There are several network-analysis packages that have some dynamics capability. Gephi is one. The networkDynamic and ndtv R packages provide tools for representing and visualizing dynamics as animations and static layouts (disclaimer: I'm a maintainer).
