I have a group of Tableau .twbx files that I would like to store in Hadoop. Is there a way to store them and then access them via Tableau Desktop?
Tableau Server is probably what you're looking for here.
Again, you've got a couple of options. In my setup, I keep my extracts on Tableau Server and then have a daily task that pushes them to a Git repo; that's more for backup, though.
If you're looking to store the active workbooks in the S3 bucket, you can configure Tableau Server with "worker nodes" to accomplish this. See the link to the Tableau Server Administrator's Guide below; it has an entire chapter on setting up a distributed server.
Tableau Server Administrator's Guide
I currently have a desktop PBIX file that I manually publish to Power BI Web.
I have to keep different versions of the same PBIX file just to keep track of different sources based on environment, such as Dev/QA/UAT/Prod.
I have more than one data source for each environment, i.e. in the same PBIX file I have data coming from, say, Application Insights and a REST API.
I scanned through the Power BI community to see how to do this but can't find relevant information. All pointers are for refreshing either the local PBIX or using the Scheduled Refresh option in Power BI Web.
Someone even wrote code to trigger Publish via OLE automation, but that's not an acceptable solution.
https://community.powerbi.com
I would like to automate this process such that:
A. I can provide the data source connection string/credentials externally, based on the environment I want to publish to.
B. The report is published to Power BI Web using a service account instead of my own.
Our current build and deployment tool set does allow the use of PowerShell/Azure CLI etc. Hence it would be helpful if the solution uses those.
Fetching data from Azure SQL won't need a refresh, but it's expensive.
In one of the organizations I worked for, they used views on Azure SQL to accomplish this task.
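For the automated publish itself, the Power BI REST API exposes an Imports endpoint that can be driven from a script. The sketch below uses Python for illustration (the same endpoints work from PowerShell's `Invoke-RestMethod`); the group ID, file path, and token source are placeholders, and a service-principal token would satisfy the service-account requirement:

```python
# Sketch: publishing a .pbix via the Power BI REST API Imports endpoint.
# group_id, dataset_name, pbix_path, and token are placeholders -- adapt
# to your tenant. Note: the real endpoint expects the file wrapped in a
# multipart/form-data body; the upload is shown simplified here.
import urllib.request

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_import_url(group_id: str, dataset_name: str) -> str:
    """URL for POSTing a .pbix file into a workspace (group)."""
    return (f"{API_BASE}/groups/{group_id}/imports"
            f"?datasetDisplayName={dataset_name}&nameConflict=Overwrite")

def publish_pbix(group_id: str, dataset_name: str, pbix_path: str, token: str):
    """POST the .pbix bytes using a bearer token (e.g. a service principal)."""
    with open(pbix_path, "rb") as f:
        body = f.read()
    req = urllib.request.Request(
        build_import_url(group_id, dataset_name),
        data=body,
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Swapping connection strings per environment can then be done after import with the dataset's update-datasources API, keeping a single PBIX for all environments.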
We would like to use NiFi Registry with Git as the storage engine. In that case, I modified providers.xml and was able to save the flows there.
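For reference, the relevant part of my providers.xml looks roughly like this (the storage directory, remote name, and credentials are placeholders):

```xml
<flowPersistenceProvider>
    <class>org.apache.nifi.registry.provider.flow.git.GitFlowPersistenceProvider</class>
    <property name="Flow Storage Directory">/opt/nifi-registry/flow_storage</property>
    <property name="Remote To Push">origin</property>
    <property name="Remote Access User">git-user</property>
    <property name="Remote Access Password">********</property>
</flowPersistenceProvider>
```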
Challenges:
There is no two-way sync. We can only save flows modified by a NiFi user; if we modify a flow directly in the Git location, it will not be reflected in NiFi Registry.
There is no review or approval process for NiFi Registry. A user has to log in to the nifi-registry server, create a branch, and issue a pull request.
As a workaround, we can delete the database file (H2) and restart NiFi Registry.
Lastly, everything should be automated in CI/CD, like what we do for a regular Maven project.
Any suggestions?
The purpose of the Git storage is mostly to let users visualize the differences through tools like GitHub, or any other tool that supports diffs; plus, by pushing to a remote you also get a remote backup of the flow content. It is not meant to be modified outside of the application, just like you wouldn't bypass an application, go right into its database, and start changing data.
I am very new to Hadoop tools, so I am asking this question. I am using Sqoop to push data from my relational DB to HDFS. As a next step, I want to generate some reports from this data stored in HDFS. I have my own custom report format.
I am aware that using Hive I can get data from HDFS, but is it possible to design my own custom reports (web UI) over this? Are there any other tools I can use?
Or, is it possible to deploy an application (containing an HTML GUI and Java APIs) on the same machine, so that I can access it via HTTP and see the data present in HDFS?
You can use Tableau for a better experience; it is paid, but it is the best in the market, and you can even customize your graphs and reports. You can get a trial version of Tableau from their site. You can also use Power BI from Microsoft, which is free and works well with big data. Ambrose was created by Twitter and also has good support (I haven't tried this one).
Check out Ambrose, as this is what you are looking for. You can access it via an HTTP URL.
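If you do build your own web UI, note that HDFS itself can be read over plain HTTP through the WebHDFS REST API, so no Java client is strictly required. A minimal sketch, assuming a placeholder namenode host/port and file path:

```python
# Sketch: reading a file from HDFS over HTTP via the WebHDFS REST API,
# so a custom web UI (or any HTTP client) can display the data.
# The namenode host/port and path below are placeholder assumptions.
import urllib.request

WEBHDFS_BASE = "http://namenode.example.com:9870/webhdfs/v1"

def webhdfs_open_url(hdfs_path: str) -> str:
    """Build the WebHDFS OPEN URL for a given HDFS file path."""
    return f"{WEBHDFS_BASE}{hdfs_path}?op=OPEN"

def read_hdfs_file(hdfs_path: str) -> bytes:
    """Fetch file contents; WebHDFS redirects to the datanode holding the block."""
    with urllib.request.urlopen(webhdfs_open_url(hdfs_path)) as resp:
        return resp.read()
```

A small web app could call `read_hdfs_file` server-side and render the result in whatever custom report layout you need.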
I need to set up a repository where multiple people can go to drop off Excel and CSV files. I need a secure environment with access control, so customers logging on to drop off their own data can't see another customer's data. So if person A logs on to drop off a Word document, they can't see person B's Excel sheet. I have an AWS account and would prefer to use S3 for this. I originally planned to set up an SFTP server on an EC2 instance; however, after doing some research, I feel that using S3 would be more scalable and safer. However, I've never used S3 before, nor have I seen it in a production environment. So my question really comes down to this: does S3 provide a user interface that allows multiple people to drop files off, similar to that of an FTP server? And can I create access control so people can't see other people's data?
Here are the developer resources for S3:
https://aws.amazon.com/developertools/Amazon-S3
Here are some pre-built widgets:
http://codecanyon.net/search?utf8=%E2%9C%93&term=s3+bucket
Let us know your angle, as we can provide other ideas knowing more about your requirements.
Yes, it does. You can control access to your resources using IAM users and roles.
http://aws.amazon.com/iam/
You can grant privileges on parts of an S3 bucket depending on the user or role. For example:
mybucket/user1
mybucket/user2
mybucket/development
could all have different permissions.
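A sketch of such a policy, using the IAM `${aws:username}` policy variable so that each user can only list and touch their own prefix (the bucket name is the example from above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOwnPrefixOnly",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket",
      "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
    },
    {
      "Sid": "ReadWriteOwnPrefix",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::mybucket/${aws:username}/*"
    }
  ]
}
```

Attached to each IAM user, this gives every customer an isolated "folder" inside the same bucket without writing a separate policy per user.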
Hope this helps.
When I am installing Tableau Server I am seeing this message. I am the administrator of the system, but I still see it. Please help me in this regard.
It's often safer to use the default NT AUTHORITY\NetworkService account if you're installing Tableau Server for the first time, since this is almost always guaranteed to work and can be changed later.
If you do want to proceed with using SriHarsha-PC\SriHarsha as the Run As account, then take a look at the following link from the Tableau Software Knowledge Base, which lists all of the permissions that your chosen account will need in order to run Tableau Server correctly.
Tableau Server Run As account permissions
If that does not provide sufficient information, then create a support request and Tableau Technical Support will try to help resolve the issue.