I have created a bot which tells the release date for any movie.
Should I be using a conventional SQL DB to keep the bot's answers up to date?
I read about DocumentDB; is it worth trying?
My suggestion would be to use an API if your data source offers one, such as IMDb or some other source of movie data.
That way you will not have to manage a database.
If you use DocumentDB or any other DB, you have to make sure the data in that DB stays up to date, and for that you would rely on some API or web scraping anyway. You can do that directly in your bot itself instead.
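For illustration, here is a minimal Python sketch of that approach using the third-party OMDb API (the endpoint and response fields are assumptions based on OMDb's documented format; the API key is a placeholder):

```python
# Sketch: answer release-date questions by calling a movie API directly
# instead of maintaining a database. OMDb is used here as an example;
# the API key is a placeholder.
import requests

def release_date(title: str) -> str:
    resp = requests.get(
        "https://www.omdbapi.com/",
        params={"t": title, "apikey": "<your-omdb-key>"},
    )
    resp.raise_for_status()
    data = resp.json()
    if data.get("Response") == "True":
        return f"{data['Title']} was released on {data['Released']}."
    return f"Sorry, I couldn't find a movie called '{title}'."
```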
This is for a project in Bot Framework Composer (not the SDK, so I'm using the built-in telemetry export settings).
I am looking for the best way to store event logs from bot conversations for analysis. From what I've researched, the recommended method is to go through Application Insights, which I have activated and tested. The data I require all seems to be captured in the customEvents table.
The issue is that I need to be able to manipulate the data for analysis, but Application Insights is read-only (apart from purging via the API). I need to be able to add tables, edit text, etc. I have a lot of experience with PostgreSQL, so that's my first choice for bot log storage.
So my question is: what is an efficient way to get the customEvents table that is in Application Insights into a Postgres database? From what I can see, Application Insights only exports to Azure Storage, which does not have a database option. And if I understand some of the suggested pipelines, they copy data to storage and then copy it to a database. Isn't that a lot of storage cost, since the same data will be in Application Insights, storage blobs AND Postgres?
What is the best pipeline? The goal is a non-redundant pipeline that transfers the event data in customEvents to a Postgres table with the same columns.
(If there is a way to redirect the data that goes to customEvents in Application Insights directly to a Postgres table, that would be perfect too.)
There is no way to redirect data from Application Insights directly into a Postgres table.
The first solution is continuous export to Azure Storage, as you know. Storage blobs do not cost very much, and you can clear out old data periodically to reduce the cost.
Another way is to use the Application Insights query API. To do that, you need to write your own logic to query the custom events from Application Insights and then insert them into your DB with your own code.
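As a rough illustration of that second option, here is a Python sketch (the Kusto query, the custom_events table name, and the connection details are placeholders for your own setup):

```python
# Sketch: pull customEvents via the Application Insights query REST API
# and insert the rows into a Postgres table with matching columns.
import requests
import psycopg2

APP_ID = "<app-insights-application-id>"
API_KEY = "<app-insights-api-key>"

resp = requests.get(
    f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query",
    headers={"x-api-key": API_KEY},
    params={"query": "customEvents | where timestamp > ago(1h)"},
)
resp.raise_for_status()
table = resp.json()["tables"][0]
columns = [c["name"] for c in table["columns"]]

conn = psycopg2.connect("dbname=botlogs user=postgres")
with conn, conn.cursor() as cur:
    col_list = ", ".join(f'"{c}"' for c in columns)
    placeholders = ", ".join(["%s"] * len(columns))
    for row in table["rows"]:
        cur.execute(
            f"INSERT INTO custom_events ({col_list}) VALUES ({placeholders})",
            row,
        )
conn.close()
```

Run on a schedule (a cron job, Azure Function, etc.), this skips the intermediate blob storage entirely, at the cost of owning the polling logic yourself.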
Are there any concerns with using Snowflake as the data repository for a web API from an enterprise architecture perspective?
I think the question to ask is how you are going to use the data. It is not clear what you mean by "web API data repository". If you are talking about the API interaction data itself, then Snowflake is not the right choice; you should look for a transactional data store for such use cases. However, if you want to derive insights and analytics from that data, you can ingest the transactional data into Snowflake and build your analytics layer on top of it. The question then becomes why you would want to do that, since most API products already have an analytics engine built in.
I have my website in PHP and its DB in MySQL. I want Salesforce users to search my database from within their Salesforce. For that, Heroku Connect seems to be the option, so I am thinking of converting my MySQL DB to Postgres and then using Heroku Connect to share my data with my Salesforce account. The question I have is: how can I share the same data with other Salesforce users? Those users are my website's clients, and I don't want them to have to go through this Heroku Connect process themselves. Is there a way of sharing my data with other Salesforce users?
You cannot, and should not, expose your database directly to your customers. That would allow them to change the data as well as read it.
Your solution here is to create a public API which exposes endpoints that make it possible for anyone (with proper authentication, hopefully) to query your data.
There are many ways you can design an API, whether REST or GraphQL. It is something that can absolutely be done in PHP.
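For a flavour of the idea, here is a minimal read-only search endpoint sketched in Python/Flask (the same shape translates directly to a PHP framework; the API keys, table, and column names are placeholders):

```python
# Sketch of a read-only search endpoint: clients can query, never write.
from flask import Flask, request, jsonify, abort
import psycopg2

app = Flask(__name__)
API_KEYS = {"client-key-1", "client-key-2"}  # issue one key per client

@app.route("/api/records")
def search_records():
    if request.headers.get("X-Api-Key") not in API_KEYS:
        abort(401)
    term = request.args.get("q", "")
    conn = psycopg2.connect("dbname=website user=readonly")
    with conn, conn.cursor() as cur:
        # Parameterized query guards against SQL injection.
        cur.execute(
            "SELECT id, name FROM records WHERE name ILIKE %s LIMIT 50",
            (f"%{term}%",),
        )
        rows = [{"id": r[0], "name": r[1]} for r in cur.fetchall()]
    return jsonify(rows)
```

Your Salesforce-using clients (or anything else) can then call this endpoint over HTTPS without ever touching your database directly.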
So I want to create a live dashboard (probably a Node-based app with a React front end). This dashboard will display performance data from a series of websites, gathered using Google's Lighthouse performance audit tool.
The Lighthouse tool produces a JSON file with a bunch of keys and values for performance analytics.
I will be using something like D3 or Chart.js to eventually render this data.
My issue is with how to provide this "live" data to the web front end.
Here is my idea so far (I just need to know if it is viable):
A Jenkins job will run my dockerised script, which uses the Lighthouse SDK to audit a site and return a JSON performance report.
The Jenkins job will put the JSON file into an S3 bucket.
A Lambda will be triggered each time an item is added to the S3 bucket.
The Lambda will extract the desired values from the JSON report and write them to DynamoDB.
A DynamoDB stream will be used to get the latest values from the DynamoDB table.
The web front end will query the DynamoDB streams and render the data into charts and graphs.
Can you see this process working? Would this give me a sort of "live" data feed? The idea is that the performance reports will be created multiple times during the day.
I don't think the DynamoDB stream will work the way you think, unless I'm totally misunderstanding something about DynamoDB streams. How would DynamoDB push streaming data to a web browser?
I would recommend having the Lambda function add a timestamp to each record it inserts into DynamoDB, and making the timestamp field the sort key of the table's primary index.
Next, have another Lambda function that queries the DynamoDB table for the latest record(s) using the timestamp field, and expose that Lambda function via API Gateway.
Finally, have the web front end make API calls to the endpoint you created in API Gateway to retrieve the latest performance data.
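Here is a Python/boto3 sketch of those two Lambdas (the LighthouseResults table, its site/ts key schema, and the Lighthouse report fields finalUrl and categories.performance.score are assumptions for illustration):

```python
# Sketch of the two handlers: one ingests reports from S3 into
# DynamoDB, the other serves the newest record via API Gateway.
import json
import time
from decimal import Decimal

import boto3
from boto3.dynamodb.conditions import Key

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("LighthouseResults")

def ingest_handler(event, context):
    """Triggered by S3 put: extract metrics, write a timestamped item."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        report = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
        table.put_item(Item={
            "site": report["finalUrl"],    # partition key
            "ts": int(time.time() * 1000), # sort key: insertion timestamp
            # DynamoDB expects Decimal rather than float for numbers.
            "performance": Decimal(str(report["categories"]["performance"]["score"])),
        })

def latest_handler(event, context):
    """Behind API Gateway: return the newest record for a site."""
    site = event["queryStringParameters"]["site"]
    resp = table.query(
        KeyConditionExpression=Key("site").eq(site),
        ScanIndexForward=False,  # newest first, thanks to the ts sort key
        Limit=1,
    )
    return {"statusCode": 200, "body": json.dumps(resp["Items"], default=str)}
```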
"live" can mean different things to different people and for infrequently changing data (a few times a day is not frequent compared to an interactive chat) the overhead of managing sockets, etc. might not be worth it compared to simply refreshing the page.
I don't see why you need Dynamo here; you can just read from S3 directly and perhaps use versioning on objects to track the different stats for each run.
If you genuinely want browser-based notifications you can look in to AWS IoT, and have a Lambda subscribed to the S3 bucket where the results are run that extracts the values and publishes them to IoT, which can expose a web socket for your browser based app.
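A rough Python sketch of that publishing Lambda (the topic name and the report fields are placeholders; the browser side would subscribe to the same topic over MQTT over WebSocket):

```python
# Sketch: on each S3 upload, publish extracted metrics to an AWS IoT
# topic that browser clients can subscribe to.
import json
import boto3

s3 = boto3.client("s3")
iot = boto3.client("iot-data")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        report = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
        iot.publish(
            topic="lighthouse/results",
            qos=1,
            payload=json.dumps({
                "site": report["finalUrl"],
                "performance": report["categories"]["performance"]["score"],
            }),
        )
```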
I am going to do my next project, a chatbot, for my client. I am a .NET person, so I am planning to use the Microsoft Bot Framework with C#.
My question is about the database part. We have an existing database which holds data related to project management and other things, and which is currently used by a web application.
What we are expecting from the bot is that if a person (say, the manager of a particular project) wants to know the count of people under him, he can ask the bot for the count rather than go to the web application and figure it out.
How will this database work for the bot application?
How should I create the table structure to identify the questions to be asked of the bot and its responses, fetch the data, and then display it to the user?
How can I make my bot fetch data from this DB when someone asks a question?
How can I store these responses?
I am totally confused. My client does not want to use LUIS, but wants a similar thing built with our database, callable via a REST API.
Kindly help with any article or advice to start my work with.
Consider the bot you made as a back-end server written in C#.
It just receives requests and sends responses.
So nothing special is needed for connecting it to a DB.
Simply connect it to the DB as you'd connect an ASP.NET website to a DB.
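To make the "no LUIS" part concrete, here is a sketch of the idea (in Python for brevity; in C# the shape is identical, with SqlConnection or Entity Framework inside your bot's message handler). The keyword rule, table, and column names are all placeholders:

```python
# Sketch: map an incoming message to a SQL query with a simple keyword
# rule instead of LUIS, and answer from the existing database.
import re
import psycopg2

def answer(message: str, manager_id: int) -> str:
    conn = psycopg2.connect("dbname=projects user=bot")
    with conn, conn.cursor() as cur:
        # No LUIS: a plain keyword match decides which query to run.
        if re.search(r"how many|count", message, re.I):
            cur.execute(
                "SELECT COUNT(*) FROM employees WHERE manager_id = %s",
                (manager_id,),
            )
            return f"You have {cur.fetchone()[0]} people reporting to you."
    return "Sorry, I don't understand that question yet."
```

Each supported question becomes one branch and one parameterized query; the bot's responses are just strings built from the query results, so no special "questions and answers" table is strictly required.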