File download resuming component for use in Auto Updating - download

It turns out InstallShield doesn't support file download resuming.
Is there a .NET component that provides this functionality?

We use the AppLife Update component. It is specifically for .NET and can resume interrupted downloads and do much more. We found it easy to use and flexible.

I don't think so, but it's not that hard to write one. There are a lot of HTTP clients already. All you would need to do is check whether the web server actually supports resuming. Check this blog entry for a little info about HTTP and download resuming:
http://www.west-wind.com/WebLog/posts/244.aspx
All you would need to do is store the number of bytes read so far somewhere (file, database), and read that when you're starting your application again. Be sure to check the MD5 for the file (if it has one), to ensure no errors occurred.
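To make that concrete, here is a rough, untested sketch of the resume logic. The question is about .NET (where HttpWebRequest.AddRange plays the same role), but the HTTP mechanics are identical, so the sketch is in Java: request a Range starting at the bytes already on disk, append if the server answers 206 Partial Content, and start over if it ignores the header. The URL and file are placeholders for whatever your updater uses; the MD5 check mentioned above would run after the loop.

// Minimal sketch: resume a download using an HTTP Range request.
// Assumes the server supports Range requests (answers 206 Partial Content).
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResumeDownload {
    public static void download(String url, File target) throws IOException {
        long alreadyHave = target.exists() ? target.length() : 0;
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        if (alreadyHave > 0) {
            // Ask the server to start where the previous attempt stopped.
            conn.setRequestProperty("Range", "bytes=" + alreadyHave + "-");
        }
        boolean resuming = (conn.getResponseCode() == HttpURLConnection.HTTP_PARTIAL);
        // If the server ignored the Range header, fall back to a full download
        // (append=false truncates the partial file).
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(target, resuming)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        // Afterwards, compare the file's MD5 against the published checksum if one exists.
    }
}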


I have a problem sending a downstream to a gateway

I'm using a Dragino DLOS8 gateway and a Dragino LT-22222-L end node. I wrote a script to read and show the values on my end node's inputs, but I couldn't control my relays. I found an example script (in a Dragino article titled Communication with ABP End Node) showing this function to control them (it controlled the digital outputs, but I changed it to the relays), which is:
echo "${DEV_2},imme,hex,030101" > /var/iot/push/down
I even tried a more specific one:
echo "${DEV_1},imme,hex,030101,20,1,SF12,869525000,1" > /var/iot/push/down
The article indicates that I have to create a file in the directory /var/iot/push for downstream purposes. I tried using WinSCP and the command touch down, but the file was deleted a few seconds later. If anyone has used those devices or knows about this, please help me.
I had a similar problem with Dragino, also with device log files in the /var/iot/channels directory. I got information from Dragino support that those files are "consumed" by the MQTT and TCP processes, so they are periodically deleted: I understand that the LoRaWAN, MQTT or TCP application has to work with those files as soon as they are generated.
Notice that "imme" sends downstream immediately to C type devices, maybe "time" (downstream after receiving data from node) is better for your application.

How to plug in a process of identifying sensitive information somewhere in an ETL pipeline?

Hope you are doing well!
We have already developed an ETL pipeline using Apache NiFi, which gets triggered only when a client uploads a source data file from the portal. After that, the data inside the source file goes through various layers, gets transformed, and is stored back in the warehouse (i.e. Hive).
Goal: to identify sensitive information and mask it so that the end user won't see the actual data.
Identifying sensitive data & masking strategy: we will make use of open source tools to achieve this goal, as follows.
Data Steward Studio: this tool allows me to identify sensitive information and tag it properly.
Apache Atlas: once the data steward has confirmed the tag, that tag will be pushed into Apache Atlas.
Apache Ranger: finally, we can define a tag-based masking policy using Apache Ranger that will allow or deny access for specific users.
For more details on the above solution, please visit this link:
https://www.youtube.com/watch?v=RzEfLwJaLsc
Problem: in order to feed the data to the DSS tool, it must first be loaded into a Hive table. That is fine. But we cannot stop the existing ETL flow in the middle and then start the identification of sensitive information. The above solution requires some manual steps, which I want to get rid of and make automated; that is, it should be plugged in somewhere within the NiFi pipeline. But so far, as per my understanding, DSS does not allow us to do something like that.
Manual process:
Create an asset collection.
Accept/reject the suggested tags within DSS.
If we cannot plug the identification process into the pipeline, then the client's sensitive data will be exposed and visible to everyone on the team. I want something where we can de-identify sensitive data before it actually gets loaded into HDFS or Hive tables.
Please share your response to the same problem if you have already worked in this particular area.
  
I did not test it, but here are my thoughts on this challenge.
1. Set up the system such that data is NOT visible to everyone (or anyone) by default.
2. Load the data into Hive.
3. Let the profilers run and accept their suggestions.
4. Open up the data to those who should have access (except for the things found by the profiler).
There are still some implementation details to work out (e.g. how to automate steps 3 and 4, and whether you can solve this with tags alone or whether the data needs to sit in a staging area first). But I hope this steers you in a good direction.
One idea might be to use NiFi's EncryptContent processor (https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.EncryptContent/). Then the values loaded into Hive are encrypted in the first place and would not be visible to the stewards. Once the tagging has been done, then in the subsequent part of the pipeline (where I'm assuming you're using NiFi as well) you can decrypt the content as required.
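To illustrate that encrypt-first, decrypt-later idea, here is a rough, untested Java sketch of symmetric field-level encryption with AES-GCM. It only shows the principle: NiFi's EncryptContent processor has its own algorithm and key-derivation options, so this is not a drop-in match for the processor's output format, and the key handling below (a freshly generated key) is purely a placeholder for a real key store.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class FieldCrypto {
    private static final int IV_BYTES = 12;   // recommended IV size for GCM
    private static final int TAG_BITS = 128;  // authentication tag length

    // Encrypt one field value before load; the IV is prepended so decryption is self-contained.
    static String encrypt(String plain, SecretKey key) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plain.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(out);
    }

    // Decrypt downstream, once the tagging and masking policies are in place.
    static String decrypt(String encoded, SecretKey key) throws Exception {
        byte[] all = Base64.getDecoder().decode(encoded);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, all, 0, IV_BYTES));
        byte[] pt = c.doFinal(all, IV_BYTES, all.length - IV_BYTES);
        return new String(pt, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();  // in practice the key would come from a key store
        String masked = encrypt("4111-1111-1111-1111", key);
        System.out.println(masked + " -> " + decrypt(masked, key));
    }
}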

Using maketorrent in libtorrent examples

So I am trying to build an application that uses libtorrent. However, before I start I would like to make sure that I have compiled the lib correctly and that I have a functioning environment for testing.
I am currently running a VM with opentracker and am trying to connect using the example client in libtorrent.
First I start by creating a .torrent file using libtorrent (I am currently not sitting in front of a computer with libtorrent available so I might be remembering the exact commands a bit wrong):
maketorrent.exe dummy.txt -t "http://10.XXX.XXX.XXX/announce"
This gives me a .torrent file called a.torrent. Opening the file, everything looks OK: the bencoding is correct and the announce address is there.
Next I try to add it to the example client hoping it starts to seed:
client_test.exe a.torrent
Everything starts up OK, but no tracker is found. Then if I press t to show tracker information I see an error (maybe not the exact phrasing):
Alert: {null} unsupported URL protocol
OK, so maybe something is wrong with how I built libtorrent. So I get the Halite client instead, since that is also supposed to be built upon libtorrent. But there I have the same problem.
So I had a look at the code and found where this error message is generated. The code is checking whether I am supplying an address using the HTTP or HTTPS protocol, which I am. So could it be that I am not able to use a bare IP address, or am I doing something wrong?
I found the problem. It was not a problem with the IP address or the torrent itself. Instead it was a problem with caching.
The first time I added the torrent I used http:\XXX.XXX.XXX.XXX instead of http://XXX.XXX.XXX.XXX, which didn't work. However, whatever change I made to the torrent file after that did not stick. It always fell back to that original file until I removed the .resume folder.

How to delete a file from the server after downloading it? Or how can I store the file on the client machine directly from an output stream?

I am using a Liferay custom portlet, and in it I am using JasperReports. My problem is: how can I download the PDF report directly to the client machine?
Right now I am storing the file on the server first, then providing a URL for the user to download the PDF. But how can I store the file directly on the client machine if I have the PDF file's output stream?
Or, if I can somehow know when the user has clicked the download link and the file has finished downloading, how can I then delete the downloaded file from the server? If anyone can guide me...
I'm not sure what you're asking for is possible, but I would be interested in seeing someone correct that statement.
Servers really shouldn't be directly storing files on a client machine, as that violates the intent of the client-server relationship. A client has to make a request for the file, and then the client can save that file (e.g. an FTP download). Servers just don't manipulate client machines as they see fit.
As far as knowing when a file is downloaded, there isn't anything in a portlet you can do to detect that. You can use a ResourceRequest and the serveResource method to serve a file, but nothing in the portlet API will inform your portlet that the download is complete or that it wasn't interrupted by something.
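For what it's worth, here is an untested sketch of a serveResource implementation that streams a generated PDF and then deletes the temporary file once the bytes have been written. Finishing the write only tells you the server sent the data, not that the client received it intact, so the delete is best effort, as noted above. The file location and names are made up.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.ResourceRequest;
import javax.portlet.ResourceResponse;

public class ReportPortlet extends GenericPortlet {

    @Override
    public void serveResource(ResourceRequest request, ResourceResponse response)
            throws PortletException, IOException {
        // Hypothetical location of the report generated earlier by JasperReports.
        File report = new File(System.getProperty("java.io.tmpdir"), "report.pdf");

        response.setContentType("application/pdf");
        response.setProperty("Content-Disposition", "attachment; filename=report.pdf");

        try (InputStream in = new FileInputStream(report);
             OutputStream out = response.getPortletOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } finally {
            // We only know the stream was written, not that the client received it,
            // so deleting here is best effort.
            report.delete();
        }
    }
}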
As an alternative you might try simply having a cron job that will clean out old files. In this case, make sure to inform users how long they have to successfully download the file.
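If setting up a system cron job is awkward, the same cleanup can live inside the JVM using a ScheduledExecutorService; this is just a sketch of the idea, and the export directory and retention period are invented.

import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReportCleanup {
    // Delete generated reports older than one hour, checking every 15 minutes.
    public static void start() {
        File exportDir = new File("/opt/liferay/report-exports");  // hypothetical path
        long maxAgeMillis = TimeUnit.HOURS.toMillis(1);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            File[] files = exportDir.listFiles();
            if (files == null) return;
            long now = System.currentTimeMillis();
            for (File f : files) {
                if (now - f.lastModified() > maxAgeMillis) {
                    f.delete();
                }
            }
        }, 15, 15, TimeUnit.MINUTES);
    }
}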

Desktop SPARQL client for Jena (TDB)?

I'm working on an app that uses Jena for storage (with the TDB backend). I'm looking for something like the equivalent of Squirrel, that lets me see what's being stored, run queries etc. This seems like an obvious thing to need, but my (perhaps badly phrased) google queries aren't turning up anything promising.
Any suggestions, please? I'm on XP. Even a command line tool would be helpful.
Take a look at my Store Manager tool which is part of the dotNetRDF Toolkit which I develop as part of the wider dotNetRDF project I maintain.
It provides a fairly basic GUI through which you can connect to various Triple Stores including TDB provided that you expose your dataset via Joseki/Fuseki. You need to have .Net 3.5 installed to run the apps in the toolkit.
If you don't already expose your TDB dataset via HTTP, try using Fuseki: it is ridiculously easy to use and can be run on your local machine when necessary to make your TDB store available via HTTP for use with my tool, e.g.
java -jar fuseki-0.1.0-server.jar --update --loc data /dataset
Please see the Fuseki wiki for more information on running Fuseki and the various options. In the above example Fuseki is run with SPARQL Update enabled (the --update flag), using the TDB dataset located in the directory data (the --loc data argument) and with a base URI of /dataset for the data.
Once running you can use my tool to connect to a Fuseki server by going to File > New Generic Store Manager, selecting the "Fuseki" tab from the dialog that appears, entering the URI http://localhost:3030/dataset/data and then clicking "Connect to Fuseki".
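If you would also like to check the data programmatically rather than through a GUI, Jena's ARQ API can run SPARQL against the same Fuseki server. A minimal, untested sketch follows; the endpoint URI assumes Fuseki's default /dataset/query service, and the org.apache.jena.query package names are from current Jena releases (older ones used com.hp.hpl.jena.query).

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.query.ResultSetFormatter;

public class FusekiQuery {
    public static void main(String[] args) {
        // Query endpoint exposed by the Fuseki command above (default service name assumed).
        String endpoint = "http://localhost:3030/dataset/query";
        String query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";

        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet results = qe.execSelect();
            ResultSetFormatter.out(System.out, results);  // print a simple text table
        }
    }
}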
Twinkle is a handy SPARQL client: http://www.ldodds.com/projects/twinkle/
As it happens I'm working on something similar myself, but it still needs a lot of work (check back in a month :) http://hyperdata.org/wiki/Scute
First download Jena Fuseki from
https://jena.apache.org/download/index.cgi
Unzip the file and copy the "jena-fuseki-1.0.1" folder to the C: drive.
Open cmd and change into the folder:
"cd C:\jena-fuseki-1.0.1"
Then type:
"java -jar fuseki-server.jar --update --loc data /dataset"
Finally, open a browser and go to:
"localhost:3030/"
Remember that you must first set the environment variable (located in System Properties, Advanced tab): edit the variable called "Path" under "System variables" and add
"C:\jena-fuseki-1.0.1"
I also develop a SPARQL client, open source and written in Java Swing: EulerGUI.
In fact it does a lot more, see the manual:
http://eulergui.svn.sourceforge.net/viewvc/eulergui/trunk/eulergui/html/documentation.html
For the SPARQL feature, it is better to take the EulerGUI minimal build:
http://sourceforge.net/projects/eulergui/files/eulergui/1.11/
