I need some tips and examples for the following task:
I have image data somewhere on my disk ... the files can be of type .svg/.bmp/.gif/.png ... let's say for the moment that all of them are of type .svg.
My task is to insert several of these images at specific places in the WordML that I am generating.
The generation of the WordML is working superbly, but as I have NEVER before read or heard about inserting image data into WordML ... I am kinda lost.
I am going forward with <maml:medialink> and <maml:image>.
It would be really nice if anyone could give me a little introduction and some support with this new venture of mine.
Thank you.
Jasmin
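For reference, classic WordML (the Word 2003 XML format) embeds inline images by base64-encoding the binary data into a w:binData element inside a w:pict, with a VML v:shape referencing it by its internal wordml:// name; Word 2003 does not render .svg natively, so SVG sources are typically converted to .png/.gif/.bmp first. A minimal sketch in Python; the file path, dimensions, and internal name are illustrative:

    import base64

    # Builds a classic WordML (Word 2003 XML) picture fragment for one image.
    # The wordml:// name ties the base64 payload in w:binData to the VML
    # shape that displays it. Path, size, and name below are illustrative;
    # SVG sources usually need converting to PNG/GIF/BMP first, since Word
    # 2003 does not render SVG natively.
    def wordml_picture(path, internal_name="wordml://02000001.png",
                       width_pt=120, height_pt=90):
        with open(path, "rb") as f:
            payload = base64.b64encode(f.read()).decode("ascii")
        return (
            '<w:pict>'
            '<w:binData w:name="%s">%s</w:binData>'
            '<v:shape type="#_x0000_t75" style="width:%dpt;height:%dpt">'
            '<v:imagedata src="%s" o:title=""/>'
            '</v:shape>'
            '</w:pict>'
        ) % (internal_name, payload, width_pt, height_pt, internal_name)

    fragment = wordml_picture("logo.png")  # splice into the generated w:r run

The fragment belongs inside a run (w:r) of the paragraph where the image should appear, and the document root has to declare the w, v, and o namespaces.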
I am going to build a chatbot from scratch with Rasa. The biggest difficulty right now is how to automate the production of training data. Training data includes nlu.md and stories.md.
I have tried rasa-nlu-trainer and Chatito, but there is still a lot of manual work. If there are tens of thousands of utterances in the future, how do I annotate the data so that it matches the format of nlu.md and stories.md?
Is there an automated tool or program to do this? Thanks a lot!
Well, if you're doing anything ML-related, your data is the most important thing, because it's what the model learns from. We create the data first, and then we train the model on it. What you're asking for is something that somehow creates that data for you. It's precisely because nothing like that exists that we build datasets ourselves for the model to learn from. So if you automate the data-creation process, what do you expect the model to learn?
So no, you can't create the data automatically; if that were possible, we would already have Artificial General Intelligence (AGI) by now.
But if your goal is just to format existing data, then you can write a script for that.
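For the formatting part, a minimal sketch of such a script, assuming a hypothetical corpus.csv with intent and text columns (the filenames and columns are illustrative, not part of Rasa itself):

    import csv
    from collections import defaultdict

    # Converts a hypothetical annotated corpus (CSV with columns: intent, text)
    # into the markdown layout Rasa expects in nlu.md.
    def csv_to_nlu_md(src="corpus.csv", dst="nlu.md"):
        examples = defaultdict(list)
        with open(src, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                examples[row["intent"]].append(row["text"].strip())
        with open(dst, "w", encoding="utf-8") as f:
            for intent, texts in sorted(examples.items()):
                f.write("## intent:%s\n" % intent)
                for text in texts:
                    f.write("- %s\n" % text)
                f.write("\n")

    csv_to_nlu_md()

The actual labelling, deciding which intent each utterance belongs to, is exactly the part that can't be automated away.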
I have a use case: I want to integrate/transform data from different/disparate sources without storing it. The data sources are databases (Oracle, DB2, etc.), web services (REST/SOAP), flat files (CSV, XML, JSON), MQ dumps, and mainframe systems. I want to pull data from these sources, do some kind of intelligent transformation and integration, and provide the result to our customers. It looks like a typical ETL scenario, but my situation is different: I am not allowed to store the data given by the disparate sources. That means, as a simple example, I pull data from Oracle, SOAP, and REST, and do all my intelligent transformations and integrations on the fly.
I browsed through Google and technical articles but could not find a convincing solution to my problem.
It would be great if you could give me some insight on this problem and suggest possible approaches to it.
Note: the data coming from these sources can sometimes be really huge.
Thanks in Advance
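To show what the "no storage" constraint can look like in practice, here is a minimal sketch of the streaming idea using Python generators; the two fetch functions are hypothetical stand-ins for real connectors (a DB cursor, a paginated REST client), and records flow from source to sink without being persisted:

    # Stream-style integration: records are pulled lazily, joined on the
    # fly, and handed to the output sink; nothing touches intermediate
    # storage. The fetch functions are hypothetical stand-ins.

    def fetch_oracle_rows():
        # pretend these arrive one at a time from a server-side cursor
        yield {"id": 1, "name": "alice"}
        yield {"id": 2, "name": "bob"}

    def fetch_rest_records():
        # pretend these arrive from a paginated REST endpoint
        yield {"id": 1, "email": "alice@example.com"}
        yield {"id": 2, "email": "bob@example.com"}

    def join_by_id(left, right):
        # buffers only the smaller side as a lookup table; for two huge
        # streams you would need a keyed/partitioned join instead
        lookup = {r["id"]: r for r in right}
        for row in left:
            merged = dict(row)
            merged.update(lookup.get(row["id"], {}))
            yield merged

    for record in join_by_id(fetch_oracle_rows(), fetch_rest_records()):
        print(record)  # deliver to the customer-facing sink, don't store

Data-virtualization tools like the ones recommended below industrialize exactly this pattern.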
Take a look at http://teiid.org
That is exactly what it does, and it is open source.
Talend Open Studio is a great solution as well. I'm using it, and it makes building the ETL workflow easy.
https://www.talend.com/products/data-integration/data-integration-manuals-release-notes/
You can see a lot of help videos: https://www.youtube.com/results?search_query=talend+studio
Hello and thanks for checking out my question,
I am working on a project analysing film and visualizing the data I get from it. I'm quite new to programming and only have some basic experience in Java and JavaScript.
For my project I want to store the dB levels of a movie in a CSV file, to later work with the data in Processing. I couldn't find anything for Mac (OS X) that wasn't too complex for me to comprehend.
Help would be much appreciated!
Thank you.
You're going to have to break your problem down into smaller steps.
Step 1: Generating the CSV file.
There are probably a million different ways to do this, and that can be pretty confusing. But break this down into smaller sub-steps and then take those steps one at a time. Can you get a movie playing in Processing? There is a Video library that does just that. Then can you get the volume level every X seconds? You might start with a separate sketch that just prints something to the console every X seconds. For getting the volume, you might try out the Minim library. If that doesn't work, Google is your friend, and remember to keep breaking your problem down into smaller steps!
Step 2: Loading the CSV file.
Now that you have the CSV file, you have to load it into Processing. There are several functions in the reference that might come in handy. Again, start with an example program that just prints the values to the console. Get that working perfectly before moving on.
Step 3: Visualizing the data.
Now that you have the data in your Processing code, you can start thinking about how you want to visualize the data. Maybe a line chart that just shows the volume over time just to start with.
If you get stuck on a specific step, then try to break it down into smaller sub-steps. Create an example program that just tests one of those smaller sub-steps (also known as an MCVE), and you'll be able to ask a more specific code-oriented question. Good luck, sounds like an interesting project!
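One small addition: whatever ends up writing the CSV in Step 1, it pays to settle on a simple row format early, e.g. one (seconds, level) pair per line. A quick way to sanity-check such a file outside Processing, sketched in Python with hypothetical column names:

    import csv

    # Assumes a hypothetical "levels.csv" with a header row: seconds,db_level
    # Prints the sample count and the loudest moment as a quick sanity check.
    with open("levels.csv", newline="") as f:
        rows = [(float(r["seconds"]), float(r["db_level"]))
                for r in csv.DictReader(f)]

    peak_time, peak_level = max(rows, key=lambda pair: pair[1])
    print("read %d samples" % len(rows))
    print("peak of %.1f dB at %.1f s" % (peak_level, peak_time))

If that script agrees with what you hear in the movie, the Processing side of Step 2 becomes much easier to debug.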
This may be a question for Survey Monkey, but I felt that someone here may have encountered something like this before. Is there a way to work with the API of Survey Monkey (SM) to add the information from a survey straight into a database of my own? I realize that I can export the information into output files, but I was wondering if there was a way to access the information in the SM database directly. I feel like this might raise some privacy concerns for SM. Has anyone attempted this, or would my best option be to create my own surveys without a third-party website?
I had a similar issue and here's my solution.
I was doing health-related surveys which contain HIPAA-protected Personal Health Information. Zapier is NOT HIPAA-safe, so the "zap the results over to Google Drive" solution didn't work.
So I wanted a quick n dirty way to grab SM survey data and begin to design a data structure to analyse and store this data. I figured that I would start with <1000 results, sort it out, then build out a bigger/fancier structure as needed.
I just downloaded CSV's of the SM individual responses, munged the downloaded CSV files to make a Python CSV reader happy, then wrote a Python 3.5 script to grab the survey data and spit it out into a couple of output CSV files designed for different analytic purposes.
It was really quick and easy to alter the Python script to deliver different subsets of data to different output files, and really quick and easy to see if these output (CSV or XLS) files really told me what I wanted to know.
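A script of that kind can be very short. A minimal sketch, with hypothetical filenames and column names (real SM exports need the header munging mentioned above first):

    import csv

    # Reads a hypothetical SurveyMonkey export "responses.csv" and splits
    # it into purpose-specific output files. Column names are illustrative.
    def split_columns(src, dst, fieldnames):
        with open(src, newline="", encoding="utf-8") as fin, \
             open(dst, "w", newline="", encoding="utf-8") as fout:
            writer = csv.DictWriter(fout, fieldnames=fieldnames)
            writer.writeheader()
            for row in csv.DictReader(fin):
                writer.writerow({k: row[k] for k in fieldnames})

    split_columns("responses.csv", "demographics.csv",
                  ["respondent_id", "age", "zip"])
    split_columns("responses.csv", "answers.csv",
                  ["respondent_id", "q1", "q2"])

Each output file then feeds one analytic purpose, and adding another subset is one more function call.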
This is a really quick and easy way to start analysing right away without spending too much time on procedural overhead. You can alter CSV (or XLS) tables really quickly and easily, so you can mix and match data and derived data as much as you want. A wise person once told me, "don't think, do." The more you analyse on small runs of data, the better your final Big Buildout In The Sky will look.
Yeah, you can spend a lot of time writing an API and setting up a database, but if you are not yet completely sure what you want out of the SM data, start small. Hope this helps.
I have a file transfer app that I've been writing, and part of it involves a PySide GUI that will show the progress of file transfers. I have dictionary data being passed around while the transferring happens, and I'm struggling with which variety of TableView/Widget and AbstractItemView/Model/etc. to use.
In short, I'd like to be able to use the dictionary of data to populate the table and then have the table reflect changing values in the dictionary (like progress %, file size, etc.). Unfortunately, model/views still elude me, and even a step in the right direction would be most appreciated. Thanks in advance, SO!
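A common starting point here is QAbstractTableModel: wrap the dictionary in a model, point a QTableView at it, and notify the view when the dictionary changes. A minimal sketch, assuming a dict mapping filenames to per-transfer info (the key and field names are illustrative):

    import sys
    from PySide import QtCore, QtGui

    class TransferModel(QtCore.QAbstractTableModel):
        # Read-only table over {filename: {"size": ..., "progress": ...}}.
        HEADERS = ["File", "Size", "Progress"]

        def __init__(self, transfers, parent=None):
            super(TransferModel, self).__init__(parent)
            self.transfers = transfers  # the live dict your app mutates

        def rowCount(self, parent=QtCore.QModelIndex()):
            return len(self.transfers)

        def columnCount(self, parent=QtCore.QModelIndex()):
            return len(self.HEADERS)

        def data(self, index, role=QtCore.Qt.DisplayRole):
            if role != QtCore.Qt.DisplayRole:
                return None
            name = sorted(self.transfers)[index.row()]
            entry = self.transfers[name]
            cells = [name, entry["size"], "%d%%" % entry["progress"]]
            return cells[index.column()]

        def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole):
            if orientation == QtCore.Qt.Horizontal and role == QtCore.Qt.DisplayRole:
                return self.HEADERS[section]
            return None

        def refresh(self):
            # call this after the dict changes so the view repaints
            self.layoutChanged.emit()

    app = QtGui.QApplication(sys.argv)
    transfers = {"report.pdf": {"size": "2.4 MB", "progress": 40},
                 "photo.png": {"size": "1.1 MB", "progress": 90}}
    model = TransferModel(transfers)
    view = QtGui.QTableView()
    view.setModel(model)
    view.show()
    sys.exit(app.exec_())

The key idea is that the model is just a thin adapter over your existing dict: updating a progress value means mutating the dict and calling refresh() (or, more surgically, emitting dataChanged for the affected cells).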