I'm using DocPad and want to increment a counter (for cache-busting of assets) every time the static site is generated.
I figured the easiest way would be to:
hook into docpad.coffee.writeBefore
increment a counter, templateData.assetCounter
persist docpad.coffee.
I'm still figuring out what functionality comes out of the box with DocPad, so I'm looking for a way to persist docpad.coffee to disk. Would that be a good idea at all?
Of course I could read/write to disk using require('fs'), but that might conflict or race with whatever DocPad is already doing internally (just guessing).
Any ideas?
That's a really cool idea! A plugin would be great for this; it could:
hook into docpadReady to load the persistence file
hook into extendTemplateData to add the current counter value to the template data
hook into writeAfter to increment the counter and save it to the persistence file
the persistence file could just be my-website/generateCounter.json
This way you don't have to modify your docpad.coffee file after each generation :)
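A rough sketch of what such a plugin could look like, written here in JavaScript/TypeScript form for illustration (DocPad plugins are conventionally CoffeeScript classes extending BasePlugin, and the exact hook signatures and option shapes should be verified against the DocPad plugin documentation; the file name generateCounter.json is just the example from above):

```typescript
// docpad-plugin-generatecounter -- illustrative sketch only.
import * as fs from 'fs';
import * as path from 'path';

module.exports = (BasePlugin: any) =>
  class GenerateCounterPlugin extends BasePlugin {
    name = 'generatecounter';
    counterFile = path.join(process.cwd(), 'generateCounter.json');
    counter = 0;

    // Load the persisted counter when DocPad starts up.
    docpadReady(opts: any, next: (err?: Error) => void) {
      try {
        this.counter = JSON.parse(fs.readFileSync(this.counterFile, 'utf8')).counter || 0;
      } catch {
        this.counter = 0; // first run: no persistence file yet
      }
      return next();
    }

    // Expose the current value to templates as templateData.assetCounter.
    extendTemplateData(opts: any) {
      opts.templateData.assetCounter = this.counter;
    }

    // After a successful generation, bump the counter and persist it for next time.
    writeAfter(opts: any, next: (err?: Error) => void) {
      this.counter += 1;
      fs.writeFileSync(this.counterFile, JSON.stringify({ counter: this.counter }));
      return next();
    }
  };
```

Because the plugin only touches its own JSON file, it shouldn't race with anything DocPad writes itself, and docpad.coffee stays untouched.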
Related
I am developing an on-premise solution for a client; I have no control over the machine and it has no internet connection.
The solution is monetized based on the number of allowed requests (REST API calls) for a purchased license. Currently we store the request count in an encrypted file on the file system itself. But this solution is not perfect, as the file can be copied somewhere and then restored once the request quota is used up. Also, if the file is deleted, manual intervention from support is needed.
I'm looking for a way to store the state/data in the binary itself and update it at runtime (think of a usage count that the executable updates inside its own file).
I'm looking for a better approach.
Also, the binary should start from the previously stored state.
Is there a way to do it?
P.S. I know writing to the binary won't solve the issue, but I think it will increase the difficulty by increasing the number of places where the state could be hidden, and since it's not common knowledge that you can modify an executable, that would be the last place someone would look for the state if they were trying to mess with the system (security by obscurity).
Is there a way to do it?
No.
(At least no official, portable way. Of course you can modify a binary and change e.g. the data or BSS segment, but this is hard, OS-dependent, and does not solve your problem, since it has the same weakness as an external file: the user can simply keep a copy of the original executable and start over with that. Some things simply cannot be solved technically.)
If the REST API is within your control and is the part you are monetizing, then surely that is the point at which you should enforce the license: use some kind of certificate authentication or an API key, and keep the count on the API side, which you control. Then it doesn't matter whether the count lives in a flat file or a DB, because you control it.
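A minimal sketch of that idea, assuming a Node/Express-style service (the header name, license keys, and limits here are hypothetical):

```typescript
import express from 'express';

const app = express();

// Hypothetical per-license quota store; in practice this would live in storage you control.
const quota: Record<string, { used: number; limit: number }> = {
  'license-key-abc': { used: 0, limit: 100_000 },
};

// Enforce the license on the API side, where the client cannot simply reset the count.
app.use((req, res, next) => {
  const key = req.header('X-License-Key') ?? '';
  const entry = quota[key];
  if (!entry) return res.status(401).json({ error: 'unknown license key' });
  if (entry.used >= entry.limit) return res.status(429).json({ error: 'request quota exhausted' });
  entry.used += 1;
  next();
});

app.get('/api/resource', (_req, res) => res.json({ ok: true }));

app.listen(8080);
```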
Here is a solution to what you are actually trying to achieve (rather than to writing into the executable itself) that will defeat casual copying of files.
A possible approach is to regularly write the request count and the current system time to a file. This file does not even have to be encrypted; you just need to generate a hash of the data (e.g. using SHA-2), sign it with a private key, and append the signature to the file.
Then, when you (re)start the service, read and verify the file using your public key, and check that not too much time has passed since the timestamp written to the file. Note that an initial file will have to be written at installation, and your service will need to be running continually, allowing only for brief restarts. You would probably also verify that the timestamp is not in the future, as that would indicate an attempt to circumvent the system.
Of course this approach has problems, such as the client fiddling with the system time, or debugging your code to find the private key, and probably others. Hopefully these are hard enough to act as a deterrent. Also, if the service or system is shut down for an extended period of time, some sort of manual intervention would be required.
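A minimal sketch of the write/verify cycle, assuming a Node-style service and an Ed25519 key pair (the file name, field names, and the 24-hour staleness window are illustrative, not part of the answer above):

```typescript
import * as crypto from 'crypto';
import * as fs from 'fs';

const STATE_FILE = 'usage-state.json';      // hypothetical location
const MAX_GAP_MS = 24 * 60 * 60 * 1000;     // how long the service may be down before flagging

// For the sketch the key pair is generated inline; in practice it would be generated once
// at build time, with the private key embedded in the binary (a deterrent, not real protection).
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519');

export function saveState(requestCount: number): void {
  const payload = JSON.stringify({ requestCount, timestamp: Date.now() });
  // Ed25519 signs the message directly, so the digest algorithm is passed as null.
  const signature = crypto.sign(null, Buffer.from(payload), privateKey).toString('base64');
  fs.writeFileSync(STATE_FILE, JSON.stringify({ payload, signature }));
}

export function loadState(): number {
  const { payload, signature } = JSON.parse(fs.readFileSync(STATE_FILE, 'utf8'));
  const ok = crypto.verify(null, Buffer.from(payload), publicKey, Buffer.from(signature, 'base64'));
  if (!ok) throw new Error('state file has been tampered with');

  const { requestCount, timestamp } = JSON.parse(payload);
  const age = Date.now() - timestamp;
  if (age < 0) throw new Error('timestamp is in the future');      // clock rolled back?
  if (age > MAX_GAP_MS) throw new Error('state file is too old');  // service was down too long
  return requestCount;
}
```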
I'd like to store some of my data in relatively big files (a few GBs per file). I'd like to use event sourcing and save events related to these files, e.g. FileCreated: title, description, timestamp, author, personal, encryptionkey, etc. After a while some of the files won't be needed any longer, and they take up a lot of space, so in order to free up space I need to delete them. Doing so is problematic, because I will still have the history in the event storage but not the file in the filesystem. Is there any way to keep integrity and somehow delete both? Or is there a best practice for this problem?
Since I did not get an answer, I will try to answer this myself.
It is possible to remove an event from the history: you create a new event storage and filter out the events with the aggregate id you want to get rid of. After you are done, you switch to the new event storage and remove the old one. You probably need to replay projections as well, so it is very similar to a full migration and takes a lot of time. In my case that is not a problem, since I would need to do this only once a year or so. Another problem with storing this data in the event storage is that I either have to stream it from there or duplicate it in order to serve it. The latter is not always a good solution, because sometimes copying takes too much time, and to serve the data you need to stream it anyway, otherwise you run out of memory very fast. So the event storage should support streaming attachments.
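A rough sketch of that filtering migration, using a deliberately simplified event-storage shape (the StoredEvent type and the newline-delimited JSON files are hypothetical, just to make the idea concrete):

```typescript
import * as fs from 'fs';
import * as readline from 'readline';

// Hypothetical minimal event shape; a real storage would carry more metadata.
interface StoredEvent {
  aggregateId: string;
  type: string;      // e.g. "FileCreated"
  payload: unknown;
  timestamp: number;
}

// Copy every event except those belonging to the removed aggregates into a new
// storage (here: newline-delimited JSON), streaming so memory use stays flat.
async function migrateWithout(
  oldStorePath: string,
  newStorePath: string,
  removedAggregateIds: Set<string>,
): Promise<void> {
  const out = fs.createWriteStream(newStorePath);
  const lines = readline.createInterface({ input: fs.createReadStream(oldStorePath) });
  for await (const line of lines) {
    const event: StoredEvent = JSON.parse(line);
    if (!removedAggregateIds.has(event.aggregateId)) {
      out.write(line + '\n');
    }
  }
  out.end();
  // Afterwards: switch the application to newStorePath, delete the old storage
  // and the big files themselves, and replay the projections.
}
```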
Another solution is to keep the relatively big data in the files and, when a file has been removed, display something like "404 not found" or "the file was removed because of this and that"; I see this frequently. In this case it is OK to keep the event in the storage, and you can, for example, add a ContentRemoved event where you record the cause. Another option is to hide the removed file so it won't be listed by the app; I guess this is usual too. This solution has drawbacks as well. Migration is more complex with this approach, because you need to move both the event storage and the files. If you remove a file by accident, you cannot undo it later unless you have the file in a backup. This can be mitigated by delaying the actual file removal by a few days, so you can undo it if you change your mind. Another option is a trash, where files are only deleted by emptying the trash.
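A small sketch of that second approach, with the ContentRemoved event and a delayed purge (the event shape, the 7-day grace period, and the pathFor helper are hypothetical):

```typescript
import * as fs from 'fs';

// Hypothetical event recording why and when a file's content was removed.
interface ContentRemoved {
  type: 'ContentRemoved';
  aggregateId: string;
  cause: string;       // e.g. "no longer needed"
  removedAt: number;
}

const GRACE_PERIOD_MS = 7 * 24 * 60 * 60 * 1000; // hypothetical 7-day "trash" window

// Record the removal as an event immediately; projections stop listing the file,
// but the big file itself stays on disk so the decision can still be undone.
function removeContent(appendEvent: (e: ContentRemoved) => void, aggregateId: string, cause: string) {
  appendEvent({ type: 'ContentRemoved', aggregateId, cause, removedAt: Date.now() });
}

// Run periodically: physically delete files whose removal is older than the grace period.
function purgeTrash(removedEvents: ContentRemoved[], pathFor: (id: string) => string) {
  const cutoff = Date.now() - GRACE_PERIOD_MS;
  for (const e of removedEvents) {
    const file = pathFor(e.aggregateId);
    if (e.removedAt < cutoff && fs.existsSync(file)) {
      fs.unlinkSync(file);
    }
  }
}
```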
I think both solutions are worth considering, and it probably depends on the actual project which one is better suited.
I have a problem with a document-based project in Cocoa. I've searched for a while, but I didn't find anything that resembles my goal. What I want to build is a (computationally intensive) simulation program which generates a lot of data (probably on the order of GBs) and stores it to disk for later visualization (so I cannot write/read the files all at once).
I created a document-based project (I don't know if that is the way to go...) with the idea of saving all the data in many binary files within a package, so the user sees it as a single file. I have already tried that part, and I was able to save the document with NSFileWrapper. But the simulation files are generated while the simulation is running, and here comes the problem.
Is there a way to force the user to save the document and retrieve its path, so I can put all the generated files there? Or is it better to save the simulation files in a temporary location and then save the document periodically, so that all the files are ready for saving? Or what else can I do? It's not clear to me how the NSDocument architecture should be used in this case, nor what a good way to achieve my goal would be.
The document also contains another couple of files holding the simulation parameters and the initial state, so I can resume the simulation at a later time.
I hope this is not a stupid question. I simply want to duplicate a file in Isolated Storage to use as a backup. However, speed is really important in this case, and I wonder what the fastest way to do that is. Should I open the file from the IS, read it into a stream, then create a backup file and write to it? From what I've seen so far this takes at least half a second, which is a lot.
There's no API for copy/duplicate, so yes, your approach is the best way.
If you want to avoid the half-second delay then you'll need to do that through your application design - e.g. writing new data to a new file, or perhaps using smaller files.
If you're interested in the details of IsolatedStorage performance, then this blog has done a superb analysis:
http://appangles.com/blogs/mickn/wp7/?p=6
Do midiOutPrepareHeader and midiInPrepareHeader just set up some data fields, or do they do something more time-intensive?
I am trying to decide whether to build and destroy the MIDIHDRs as needed, or to maintain a pool of them.
You really have only two ways to tell (without the Windows source):
1) Profile it. Depending on how long it turns out to take, either add a debug-only scoped timer that logs whenever a call takes longer than what you consider acceptable for your application, or go with your pool solution. Note, though, that the docs say not to modify the buffer once you call the prepare function, and it seems that if you wanted to reuse it you might have to modify it; I'm not familiar enough with the docs to say one way or the other whether your proposed solution would work.
2) Step through the assembly and see. Don't be afraid. Get the MSFT public symbols and see whether it looks like it's just filling out fields or doing something more complicated.