I want to develop a mobile app. Users can upload or take photos of their items, and my app should say what each item is.
For example, when I take a picture of my chair and ask the app, it should say that this is a chair.
In other words, I want to query with images and get my results back as keywords.
Is there any API to do this? Or what should I use to build this app?
Any recommendation is appreciated.
Happy coding!
What you are looking for is neural networks. Try researching Google's work on neural networks and see if that is something you're interested in exploring.
Warning: it may be a rabbit hole. :)
Some starter links:
Analysis of Neural Networks
Google's TensorFlow
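To make the TensorFlow suggestion a bit more concrete, here is a minimal sketch of the kind of thing you could start from: a pre-trained ImageNet classifier (MobileNetV2 in this example) that takes a photo and returns keyword-style labels. The file name is a placeholder, and for an actual mobile app you would more likely run a converted model on the device (e.g. with TensorFlow Lite) or call a small backend service.

```python
# Minimal sketch: classify a photo with a pre-trained ImageNet model.
# Assumes TensorFlow 2.x; "chair.jpg" is a placeholder path.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify(image_path, top=3):
    # Load and resize the image to the 224x224 input the model expects.
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])
    preds = model.predict(x)
    # decode_predictions maps class indices to human-readable keywords.
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=top)[0]

for _, label, score in classify("chair.jpg"):
    print(f"{label}: {score:.2f}")  # e.g. "rocking_chair: 0.87"
```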
There's a crucial part in the process that says the best place for the chatbot to learn is from real users. What if I already have that data and would like to test the model on it?
Think of Interactive Learning, but at scale and possibly automated. Does such a feature already exist within Rasa?
Think of Interactive Learning, but at scale and possibly automated.
Are you referring to something like reinforcement learning? Unfortunately, nothing like that currently exists. Measuring the success of conversations is a tough problem (e.g. some users might give you positive feedback when the bot solved their problem, while others would simply leave the conversation). Something like external business metrics could do the trick (e.g. whether the user ended up buying something from you within the next 24 hours), but it's still hard.

Another problem is that you probably want to have some degree of control over how your chatbot interacts with your users. Training the bot on user conversations without any double-checking could potentially lead to problems (e.g. Microsoft once had an AI trained on Twitter data, which didn't turn out well).
Rasa offers Rasa X for learning from real conversations. The Community Edition is a free, closed-source product that helps you monitor and annotate real user conversations quickly.
Disclaimer: I am a software engineer working at Rasa.
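If you already have real conversations and mainly want to see how a trained model handles them, one rough, low-tech approach is to replay the logged user messages against a running Rasa server through its REST channel and review the bot's responses. The sketch below assumes the REST channel is enabled in credentials.yml and the server runs on localhost:5005; the shape of the logged data and the comparison step are made up for illustration.

```python
# Rough sketch: replay logged user messages against a trained Rasa bot
# through the REST channel. Assumes a running server on localhost:5005
# with the REST channel enabled in credentials.yml; the data shape and
# the "comparison" are made up for illustration.
import requests

RASA_URL = "http://localhost:5005/webhooks/rest/webhook"

# Hypothetical shape of existing data: one list of user turns per conversation.
logged_conversations = {
    "user-42": ["hi", "I want to change my address", "thanks"],
}

for sender_id, user_turns in logged_conversations.items():
    for text in user_turns:
        resp = requests.post(RASA_URL, json={"sender": sender_id, "message": text})
        bot_replies = [m.get("text", "") for m in resp.json()]
        # Compare bot_replies with what actually happened in the logged
        # conversation, or just print them for manual review/annotation.
        print(f"{sender_id}: {text!r} -> {bot_replies}")
```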
A lot of time goes into building a good machine learning model. When it is then converted to a Core ML model and shipped with an iOS app, it is presumably rather easy to extract it from the app that is available for download on the App Store, right?
Any ideas on how to make the extraction at least a little bit harder?
It is really easy to extract the model from the app. Just copy and paste the mlmodelc folder into your own app and you can use the model.
To protect the model you'll have to encrypt it in some way. There is no easy API for this.
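One do-it-yourself approach, as a rough sketch: ship the model encrypted inside the bundle and only decrypt it at runtime, for example to a temporary file that you then compile on the device. The snippet below only shows the offline encryption step, using the cryptography package in Python; how you hide or derive the key, and the matching decryption code in the iOS app itself, are the hard part and are not shown.

```python
# Rough sketch of the offline half of a do-it-yourself scheme:
# encrypt the model file before adding it to the app bundle.
# Assumes the `cryptography` package; key handling and the on-device
# decryption (written in Swift/Objective-C) are not shown here.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # do NOT ship this as a plain string in the binary
cipher = Fernet(key)

plain = Path("MyModel.mlmodel").read_bytes()          # placeholder file name
Path("MyModel.mlmodel.enc").write_bytes(cipher.encrypt(plain))

print("keep this key out of the app bundle:", key.decode())
```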
You may want to weigh the effort of protecting your models against the risk. What is the likelihood someone will steal your model?
Your model is your intellectual property. If you find that another app is using your model, they are infringing your IP rights and you can sue them for damages. That in itself should already be a deterrent against people stealing your model.
Found a research paper 'Protect your Deep Neural Networks from Piracy' by Mingliang Cheng and Min Wu (paper on researchgate.net) that looks very promising.
I would like to have a high-level idea of the methods Google uses to perform image annotation and how these methods are related to each other. I couldn't find this information anywhere, except for some users' guesses, and I would like something more reliable.
Thank you
I believe much of the API's backend is done with TensorFlow (https://cloud.google.com/blog/big-data/2016/05/explore-the-galaxy-of-images-with-cloud-vision-api, https://cloud.google.com/blog/big-data/2016/02/google-cloud-vision-api-available-to-all).
My guess is that there are some big ol' deep convolutional neural networks trained on Google's images, implemented with TensorFlow (https://www.tensorflow.org/, http://kaptur.co/what-googles-new-open-source-tensorflow-and-cloud-vision-api-mean-for-photo-app-developers/).
Some TensorFlow info about deep convolutional neural networks: https://www.tensorflow.org/versions/r0.9/tutorials/image_recognition/index.html
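The internals are not public, but the part you can poke at directly is the Cloud Vision API itself. A minimal sketch of requesting label annotations with the official Python client (assuming the google-cloud-vision package is installed and application credentials are configured):

```python
# Minimal sketch: ask Cloud Vision for label annotations on a local image.
# Assumes `pip install google-cloud-vision` and that
# GOOGLE_APPLICATION_CREDENTIALS points at a service-account key.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:           # placeholder file name
    image = vision.Image(content=f.read())   # vision.types.Image on older clients

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```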
Recently I have been looking into the development of social networks and I often find references to pump.io. There is however very limited information available on what pump.io actually is. The official website says nothing more than: "It's a stream server that does most of what people really want from a social network." I found some more information on this website (http://slid.es/evanp/understanding-pumpio/fullscreen#/) but that still doesn't say a lot to me.
Could someone please provide an elaborate discussion on what pump.io actually is (and does) to someone who does not know anything about (activity) stream servers? Maybe the better question is: "What is an activity stream server?"
Yeah, the term is one a lot of people are unfamiliar with and it makes a couple of distinctions that aren't immediately obvious even if you use and post to a pump.io site.
pump.io, as it is distributed, is really two programs with different sets of functions. One is the Activity Stream Server and the other is the Web Client.
At the risk of being pedantic, let me define each of the words. I know you know what the words mean, but I hope the specific contexts/usage will help:
Server: a program which distributes information (usually) across a network.
Stream: a (usually) chronological series of some sorts of pieces of information.
Activity: a description or depiction of something someone is doing.
The Activity Stream Server is a program which distributes (server) a chronological series (stream) of posts about stuff people do (activities).
The distinction is important because the website part of a pump.io website is a client for the pump server—essentially no different from a desktop or smartphone pump.io client. It listens to the pump's stream of posts and sends new posts to the pump using the same API and data formats that standalone applications—or other pumps—do.
You could actually totally decouple the Web Client and have a fully-functioning pump.io instance without any website. Users on other pump sites could see your posts and you could see theirs, and you could comment back and forth. It would make no difference.
ActivityStream is a JSON-based data format to describe "activities". The specification of ActivityStream 2.0 can be found at https://www.w3.org/TR/activitystreams-core/ and the vocabulary of activities at https://www.w3.org/TR/activitystreams-vocabulary/. To get a feel for what the data format looks like, you can have a look at the examples at https://www.w3.org/TR/activitystreams-core/#examples. More examples can be found throughout the two specifications.
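To get a rough feel for the format, here is a small "create a note" activity in the spirit of the spec's examples, built as a Python dict and serialized to JSON (the actor and content are made up):

```python
import json

# A made-up "create a note" activity, loosely following the
# examples in the ActivityStreams 2.0 specification.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": {"type": "Person", "name": "Sally"},
    "object": {"type": "Note", "content": "I just brewed a fresh pot of coffee."},
    "published": "2016-01-01T12:00:00Z",
}

print(json.dumps(activity, indent=2))
```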
pump.io is an activity stream server that does most of what people really want a social network server to do.

That's a pretty packed sentence, I understand, but I can try to unwind it a little.

"Activities" are the things we do in our on-line or off-line life—waking up in the morning, going for a run, tasting a beer, uploading a photo, adding a friend, eating a burrito, joining a group, liking a blog post.

pump.io uses a simple JSON format to represent all these kinds of activities and many more. It organizes activities into streams—time-ordered lists of activities, with the newest first. Most streams are organized by theme, like: all the things that my friends did, or all the things that I did, or all the things anyone has done to this picture.

Programmers use a simple API to connect to a pump.io server and add new activities. pump.io automatically organizes the activities into streams and makes sure the activities get to the people who are interested in them.

And, really, that's what we want from a social network.
Behrenshausen, B. (2013). 'Interview with Evan Prodromou, lead developer of pump.io'. Retrieved from: https://opensource.com/life/13/7/pump-io
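For a concrete sense of what "adding new activities" through that API can look like, here is a rough sketch of posting a note to a pump.io server's outbox. The endpoint shape follows the project's API.md, the host and nickname are made up, and it assumes you already have OAuth 1.0a credentials for the server:

```python
# Rough sketch: post a note to a pump.io user's outbox.
# The /api/user/<nickname>/feed endpoint shape follows the project's
# API.md; host, nickname and the OAuth 1.0a credentials are placeholders.
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("client_key", "client_secret", "token", "token_secret")

activity = {
    "verb": "post",
    "object": {
        "objectType": "note",
        "content": "Hello from the API!",
    },
}

resp = requests.post(
    "https://pump.example/api/user/alice/feed",
    json=activity,
    auth=auth,
)
resp.raise_for_status()
print(resp.json().get("id"))  # the server assigns an id to the published activity
```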
If you peer a few centimeters down the page on the official website, you'll see:
What's it for? I post something and my followers see it. That's the rough idea behind the pump.

There's an API defined in the API.md file. It uses activitystrea.ms JSON as the main data and command format.

You can post almost anything that can be represented with activity streams -- short or long text, bookmarks, images, video, audio, events, geo checkins. You can follow friends, create lists of people, and so on.
The software is useful for at least these scenarios:
Mobile-first social networking
Activity stream functionality for an existing app
Experimenting with social software
Those last 3 items hopefully answer your question.
Currently, you can:
install the nodejs-based pump.io server
(or) sign up for an account on a public service
post notes and pictures with configurable permissions
log in to web and client applications using your webfinger ID
I want a real and honest opinion: what do you think of the Google Visualization API?
Is it reliable to use? When I was reading the documentation I noticed that there are a lot of issues and defects to overcome. Also, can I use it to retrieve data from a MySQL database?
Thank you.
I am currently evaluating it. Compared to other JavaScript data visualization frameworks, I think it has a lot going for it:
dynamic loading is built-in
diverse, many things to choose from.
looks really great!
framework mostly takes care of picking whatever implementation fits the current browser
service based, you don't need to download anything in advance
unified data source: just create one data table and have multiple visualizations draw from that data (see the sketch below for feeding such a table from MySQL).
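To tie the "one data table" idea to the MySQL part of the question: the charts consume a plain DataTable JSON structure, so a small server-side endpoint can translate query results into it. Here is a rough sketch; the table, columns and connection details are made up, and any MySQL driver (mysql-connector-python here) would do:

```python
# Rough sketch: turn a MySQL query result into the DataTable JSON that
# google.visualization.DataTable(...) accepts on the client side.
# Table, columns and credentials are made up for illustration.
import json
import mysql.connector

conn = mysql.connector.connect(user="app", password="secret",
                               host="localhost", database="shop")
cur = conn.cursor()
cur.execute("SELECT month, revenue FROM monthly_sales ORDER BY month")

data_table = {
    "cols": [
        {"id": "month", "label": "Month", "type": "string"},
        {"id": "revenue", "label": "Revenue", "type": "number"},
    ],
    "rows": [{"c": [{"v": month}, {"v": float(revenue)}]}
             for month, revenue in cur.fetchall()],
}
cur.close()
conn.close()

# Serve this JSON from your backend and feed it to
# `new google.visualization.DataTable(jsonObject)` in the browser.
print(json.dumps(data_table))
```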
As a disadvantage, I'd like to mention security. I mean, because it's all service based, it is not so transparent what happens when you pass data into these API calls. And as far as I know, the API is free, but not open source, so I can't really check what is going on behind the covers.
I think the Google Visualization API really shines if you want to very quickly whip up a visualization gadget for use in a blog or so, and you are not interested in deploying all kinds of plugins and libraries (for example, with jQuery-based frameworks, you may need to manage multiple JavaScript libraries that work together to deliver the goods). If, on the other hand, you are creating an application that you want to sell, you might want to keep more control over what components you are using, and I would probably consider using something like Flot.
But like I said, I am only evaluating at the moment; I am not using this in production.
Works really great for me. Can be customized fairly easily. Haven't seen any scaling issues. No data is exposed so security should not be an issue. - Arunabh Das
One point I want to add here is that the Google Visualization API cannot be downloaded; it's not available for offline usage. So an application that uses it must always be connected to the internet, otherwise I think it won't be able to render charts. Due to this limitation, this API cannot be used in applications for which an internet connection is not available.
I am currently working on a web-based application that will have the Google Visualization API added to it. From the perspective of a developer, the Google Visualization API is very limited in what you can do with each individual chart, and if I had a choice I would probably look at dojox charting just because of the extra flexibility that the framework gives you.
If you are doing any kind of large web application that will use charting extensively, then I would not recommend the Google Visualization API; it does not have enough flexibility for a large web application.
I am using the Google Visualization API and I want to stress that they still won't let you download it, which means that if their servers are down, your app will be down if you depend on it. I have been using it for about 4 months, and it has crashed on me once, so I'd say it's pretty reliable, and their documentation is really nice.