When deleting SMS messages, they don't get deleted

I connected a GSM/GPRS-modem to my microcontroller and everything works fine.
When I want to delete all messages in the ME storage, I should use this command:
AT+CMGD=1,4
->OK
The delete flag '4' indicates that I want to delete all messages, and the index '1' is then ignored. However, when I check whether the storage is empty, I get:
AT+CPMS?
+CPMS: 8,100,8,100,8,100
Indicating that the memory is still occupied and no messages were deleted.
Does anyone know what I'm doing wrong?
Thanks in advance!

I use the CMGD command to delete messages.
As far as I can see, the CMGR command is used to read messages, not delete them.
Edit (since you were using the CMGD command)
It is possible that your modem doesn't support multiple parameters to the CMGD command - my Siemens modem does not (or if it does, it isn't listed in its AT command set document). Instead of deleting all the messages in a single command, I do a CMGL to list the read messages, parse the results to get the indexes, and then delete them one at a time using CMGD. If you do find another way I'd be interested.
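A minimal sketch of that parse-and-delete loop, in Go purely for illustration (the serial I/O is omitted, and the +CMGL response text below is made up):

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // A captured +CMGL response; the message contents here are invented.
    response := "+CMGL: 1,\"REC READ\",\"+31612345678\",,\"24/01/01,12:00:00+04\"\r\nHello\r\n" +
        "+CMGL: 2,\"REC READ\",\"+31612345678\",,\"24/01/01,12:05:00+04\"\r\nWorld\r\nOK\r\n"

    // Every listed message starts with "+CMGL: <index>,".
    re := regexp.MustCompile(`\+CMGL: (\d+),`)
    for _, m := range re.FindAllStringSubmatch(response, -1) {
        // Send each of these over the serial link to delete that message.
        fmt.Printf("AT+CMGD=%s\r\n", m[1])
    }
}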
Use the AT+CMGD=? command to find valid values of the parameters.
Edit (since you verified the modem supports CMGD with two parameters)
I don't know what the problem is.
I did notice that your CPMS command gives different results from mine; for example, mine gives:
AT+CPMS?
+CPMS: "SM",10,10,"MT",12,35,"MT",12,35
Yours doesn't include any storage-area strings. I'm guessing the command you actually ran was:
AT+CPMS="ME"
When you switch to ME storage and do a CMGL command does it list the undeleted messages?

Try to set the memory to ME using AT+CPMS="ME" and then retry your delete command with flag=4. I guess it will work. If not, then run AT+CSAS to save the earlier setting and retry the delete. I could not test it as there is no SMS in my ME storage area. Let me know if it worked.
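For example, the full session would look something like this (exact responses vary by modem):
AT+CPMS="ME"
+CPMS: 8,100,8,100,8,100
OK
AT+CMGD=1,4
OK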

Parse a log file remembering last run

I have a simple script that opens a file (log file), parses through it looking for specific log entries/keywords and for each entry that matches it triggers an alert.
The problem that I am trying to solve is that I would like to modify the script to remember the alerts that were already sent when it was last run, so that when the script re-runs it won't keep re-sending alerts for entries it has already reported.
The coding language is Golang; what is a valid approach to do this? A database sounds like overkill, but I don't know what alternatives are out there.
It depends on the nature of the log file: a server log (classic) or a transaction log.
Even assuming the former, it depends on its log management (long-term retention, rotation, ...).
Assuming a classic log file whose data is appended (not overwritten), a simple approach would be to record, in a separate file, each line where an alert was found.
At the next run, if a line matches one stored in that special "flag" file, the alert would not be sent again.
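A minimal Go sketch of that idea (the file names and the "ERROR" keyword are assumptions):

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// loadSeen reads the "flag" file of lines we have already alerted on.
func loadSeen(path string) map[string]bool {
    seen := make(map[string]bool)
    f, err := os.Open(path)
    if err != nil {
        return seen // first run: no state yet
    }
    defer f.Close()
    s := bufio.NewScanner(f)
    for s.Scan() {
        seen[s.Text()] = true
    }
    return seen
}

func main() {
    const logPath = "app.log"       // assumed log file
    const statePath = "alerted.txt" // the "flag" file

    seen := loadSeen(statePath)
    state, err := os.OpenFile(statePath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        panic(err)
    }
    defer state.Close()

    log, err := os.Open(logPath)
    if err != nil {
        panic(err)
    }
    defer log.Close()

    s := bufio.NewScanner(log)
    for s.Scan() {
        line := s.Text()
        // Alert only on matching lines we haven't alerted on before.
        if strings.Contains(line, "ERROR") && !seen[line] {
            fmt.Println("ALERT:", line)
            fmt.Fprintln(state, line)
            seen[line] = true
        }
    }
}

This assumes log lines are distinct enough (e.g. they carry timestamps); if not, storing the last processed byte offset instead would avoid false "already seen" matches.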

Apache NiFi GetTwitter

I have a simple question, as I am new to NiFi.
I have a GetTwitter processor set up and configured (correctly, I assume). I have the Twitter Endpoint set to Sample Endpoint. I run the processor and it runs, but nothing happens: I get no input/output.
How do I troubleshoot what it is doing (or in this case not doing)?
A couple of things you might look at:
What activity does the processor show? You can look at the metrics to see if anything has been attempted (Tasks/Time) as well as if it succeeded (Out).
Stop the downstream processor temporarily to make any output FlowFiles visible in the connection queue.
Are there errors? Typically these appear in the top-left corner as a yellow icon.
Are there related messages in the logs/nifi-app.log file?
It might also help us help you if you describe the GetTwitter Property settings a bit more. Can you share a screenshot (minus keys)?
In my case it's because there were two sensitive values set. According to the documentation, when a sensitive value is set, the nifi.properties file's nifi.sensitive.props.key value must also be set; it is an empty string by default in the Hortonworks Data Platform distribution. I set this to some random string (literally random_STRING, but you can use anything), re-created my process from the template, and it began working.
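In nifi.properties that looks like the line below (the value itself is arbitrary):
nifi.sensitive.props.key=random_STRING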
In general, I suppose this can be debugged by setting the log level to DEBUG.
However, in my case the issue was resolved more easily:
I just set up a new cluster, and decided to copy all twitter keys and secrets to notepad first.
It turns out that despite carefully copying the keys from Twitter, one of them had a leading tab. When pasted directly into the GetTwitter processor this didn't show, but fortunately it showed up in Notepad, and I was able to remove it and make this work.

Shell script - Multiple users appending to the same file at the same time

I have a script on my server that reports its performance to the user in a text file. When the same script is executed in parallel by multiple users, the information in the text file gets mixed up. I append many details about the server to the text file, which takes roughly a minute to produce the output. If I do file locking, will it hurt performance, or is there another approach I should look at?
Please help me on how to proceed.
Thanks
Balakrishnan
You could make use of a message queueing system:
POSIX message queues:
http://www.linuxhowtos.org/manpages/7/mq_overview.htm
Beanstalkd: http://kr.github.io/beanstalkd/
POSIX Message Queue for Ruby: http://rubygems.org/gems/posix_mq
Perl: http://search.cpan.org/~iljatabac/POSIX-RT-MQ-0.03/MQ.pm
Python IPC: http://semanchuk.com/philip/posix_ipc/
Other threads:
Are message queues obsolete in linux?
https://unix.stackexchange.com/questions/70837/linux-command-to-check-posix-message-queue
https://stackoverflow.com/questions/40296/what-is-the-best-free-tool-for-managing-msmq-queues-and-messages
The idea is to create a server process that receives messages and stores them in a buffer. It would only print a line to the logfile once a process's buffered message forms a complete line.
FLoM http://sourceforge.net/projects/flom/ can manage the lock you need: it's easy to use, it's fast, the same resource can be locked/unlocked by different users and it implements a rich lock model.
This example use case could give you some ideas about the tool: http://sourceforge.net/p/flom/wiki/Use%20Case%206/
Cheers
Ch.F.
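The question's own idea of plain file locking is also viable: an advisory flock(2) lock serializes the appends, and since each writer holds the lock only for the moment it writes, the performance hit should usually be negligible. A minimal sketch in Go for illustration (the report path is an assumption):

package main

import (
    "fmt"
    "os"

    "golang.org/x/sys/unix"
)

func appendReport(line string) error {
    f, err := os.OpenFile("/var/tmp/perf_report.txt",
        os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        return err
    }
    defer f.Close()
    // Block until we hold the exclusive lock; other instances wait
    // here instead of interleaving their output.
    if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
        return err
    }
    defer unix.Flock(int(f.Fd()), unix.LOCK_UN)
    _, err = fmt.Fprintln(f, line)
    return err
}

func main() {
    if err := appendReport("server stats ..."); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}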

Is it possible to create a file that cannot be copied?

To restrict the scope, let's assume we are in the Windows world only.
Also assume we don't want to play with permission policies.
Is it possible for us to create a file that cannot be copied?
Thank you in advance.
"Trying to make digital files uncopyable is like trying to make water not wet." ~ Bruce Schneier
No. You can't create a file that a SYSADMIN can't copy. You could encrypt it, though.
Well, how about creating a file that uses up more than 50% of the total space on that machine and that is not compressible?
For instance, let us assume that you want to save a boolean (true or false) in such a fashion.
Depending on its value, you could then write a bit stream of ones or zeroes and encrypt said stream using some kind of encryption algorithm, such as AES in CBC mode. This gives you the added advantage of error correction: even in case of massive data corruption, you should be able to recover your boolean by checking whether ones or zeroes are prevalent in the decrypted stream.
In that case you cannot copy it around (completely) on the machine...
Of course, any type of external memory that can be added to the system would pose a problem in this scenario. But the file would be already encrypted, so don't worry about it too much...
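A sketch of just the redundancy part of that scheme, in Go (the encryption step is omitted and the sizes are arbitrary):

package main

import (
    "fmt"
    "math/bits"
)

// encode writes the boolean as n bytes of all-ones or all-zeroes.
func encode(value bool, n int) []byte {
    b := byte(0x00)
    if value {
        b = 0xFF
    }
    buf := make([]byte, n)
    for i := range buf {
        buf[i] = b
    }
    return buf
}

// decode recovers the boolean by majority vote over all bits, so it
// survives even heavy corruption of the stream.
func decode(buf []byte) bool {
    ones := 0
    for _, b := range buf {
        ones += bits.OnesCount8(b)
    }
    return ones*2 > len(buf)*8
}

func main() {
    blob := encode(true, 1024)
    blob[3] = 0x00 // simulate corruption
    fmt.Println(decode(blob)) // still prints true
}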
Any file that can be read can have its contents written to another location (such as another file, i.e. copied).
The only thing you can do is limit who/what can read the file.
What is the motivation behind this? If it is a read-only file, you can have it as an embedded resource within your assembly.
Nice try, RIAA.
But seriously, no, you cannot. It is always possible to copy; you can just make it more difficult for people to make sense of the file, or try to hide it using something like encryption. Spotify does this.
If you really try hard, though, you could make a rootkit for Windows and use it to prevent Windows from even knowing about the file, and also prevent copies. The file would still be there and copyable by other tools, or by a Linux system accessing the NTFS volume.
If a running process opens a file and holds an exclusive lock, then other processes cannot read the file until you close the handle or your process terminates. However, as admin you could forcibly close the handle.
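A hedged sketch of that exclusive-open approach in Go on Windows, using golang.org/x/sys/windows (the path is an assumption); while this process runs, attempts to copy the file should fail with a sharing violation:

package main

import (
    "fmt"
    "time"

    "golang.org/x/sys/windows"
)

func main() {
    path, err := windows.UTF16PtrFromString(`C:\data\secret.bin`) // assumed path
    if err != nil {
        panic(err)
    }
    // Share mode 0 denies all sharing: no other process may read,
    // write, or delete the file while this handle stays open.
    h, err := windows.CreateFile(
        path,
        windows.GENERIC_READ,
        0, // exclusive
        nil,
        windows.OPEN_EXISTING,
        windows.FILE_ATTRIBUTE_NORMAL,
        0,
    )
    if err != nil {
        fmt.Println("open failed:", err)
        return
    }
    defer windows.CloseHandle(h)
    fmt.Println("holding exclusive handle; try copying the file now")
    time.Sleep(5 * time.Minute) // keep the handle open
}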
Short answer: No.
You can, of course, use security settings to limit who can read the file. But if someone can read it, then they can copy it. Even if you found some operating system trick to disable "ordinary" copying, if someone can read the file, they can extract the contents, store it in memory, and then write it somewhere else.
You can encrypt the contents so it's only useful to your own program, that knows how to decrypt it.
That's about it.
When using Windows 7 to copy some files from a hard drive, certain files popped up a message saying they could not be copied in their entirety; certain data would be omitted from the copy. I suspect that had something to do with slack space at the end of the files, though I thought the message was curious. I would have expected the copy operation to just ignore the slack space.
If you are running old (OLD) versions of windows, there are certain characters you can put in the filename that make it invalid, not listed in folders, etc. They were used a lot in the old pub ftp days of filesharing ;)
In the old DOS days, you used to be able to flag disk sectors as bad and still read from them. This meant the OS ignored the sector in question but your application would know where to look and be able to get the data. Not sure this would work these days.
Another old MS-DOS trick was to put a space character in the middle of the filename (yes, spaces were valid characters for filenames). Since there was no method on the command line to escape a space, the file couldn't be copied using the DOS commands.
This answer goes outside Windows, so bear with me.
I don't know if it's already been said, but what about a file that is an inseparable part of the firmware, so that it is always present and running? Perhaps the firmware generates a sequence that the rest of the system requires, and an incidental effect of its running is to prevent 80% or more of its code from being replicated. Let's say it sits on an entirely separate board, protected by surge protectors, heavy EM shielding, and anything else required to make it completely unerasable.
If it's possible to make a program that is always on and running for as long as the copying software is running, then yes.
I have another way, and this one IS with Windows: I will come to your house and give you a disk, then proceed to destroy every single computer you put the disk into. (This doesn't work on XP.)
Well, technically, you could create and write to a write-only network share.

Verify whether an FTP transfer is complete or not?

I have an application which polls a folder continuously. Once any file is FTP'd to the folder, the application has to move it to some other folder for processing.
Here, we don't have any option to verify whether the FTP transfer is complete or not.
One command, "lsof", is suggested in technical forums; it has a file descriptor column which gives the file's status.
Since this is a FreeBSD command and not present in old versions of Linux, I want to clarify its usage.
Can you tell us your experience with file verification, and is there any other alternative solution available?
Also, is there any risk in using this utility?
Appreciate your help in advance.
Thanks,
Mathew Liju
We've done this before in a number of different ways.
Method one:
If you can control the process sending the files, have it send the file itself followed by a sentinel file. For example, send the real file "contracts.doc" followed by a one-byte "contracts.doc.sentinel".
Then have your listener process watch out for the sentinel files. When one of them is created, you should process the equivalent data file, then delete both.
Any data file that's more than a day old and doesn't have a corresponding sentinel file should be thrown away - it was a failed transmission.
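A minimal Go sketch of that listener (the directory and the ".sentinel" suffix are assumptions):

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

func main() {
    // Look for the one-byte marker files the sender writes last.
    matches, err := filepath.Glob("/incoming/*.sentinel")
    if err != nil {
        panic(err)
    }
    for _, sentinel := range matches {
        dataFile := strings.TrimSuffix(sentinel, ".sentinel")
        fmt.Println("processing", dataFile)
        // ... process dataFile here ...
        os.Remove(dataFile) // then delete both
        os.Remove(sentinel)
    }
}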
Method two:
Keep an eye on the files themselves (specifically the last modification date/time). Only process files whose modification time is more than N minutes in the past. That increases the latency of processing the files but you can usually be certain that, if a file hasn't been written to in five minutes (for example), it's done.
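A sketch of that quiet-period check in Go (the directory and the five-minute window are assumptions):

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "time"
)

func main() {
    const quietPeriod = 5 * time.Minute
    entries, err := os.ReadDir("/incoming")
    if err != nil {
        panic(err)
    }
    for _, e := range entries {
        info, err := e.Info()
        if err != nil || info.IsDir() {
            continue
        }
        // Treat the file as complete only if it hasn't been written
        // to for at least the quiet period.
        if time.Since(info.ModTime()) >= quietPeriod {
            fmt.Println("ready:", filepath.Join("/incoming", e.Name()))
        }
    }
}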
Conclusion:
Both those methods have been used by us successfully in the past. I prefer the first but we had to use the second one once when we were not allowed to change the process sending the files.
The advantage of the first one is that you know the file is ready when the sentinel file appears. With both lsof (I'm assuming you're treating files that aren't open by any process as ready for processing) and the timestamps, it's possible that the FTP crashed in the middle and you may be processing half a file.
There are normally three approaches to this sort of problem.
providing a signal file so that when your file is transferred, an additional file is sent to mark that transfer is complete
add an entry to a log file within that directory to indicate a transfer is complete (this really only works if you have a single peer updating the directory, to avoid concurrency issues)
parsing the file to determine completeness, e.g. does the file start with a length field, or is it obviously incomplete? Parsing an incomplete XML file, for instance, will result in a parse error due to the missing end element (see the sketch after this list). Depending on your file's size and format, this can be trivial or very time-consuming.
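For the XML case, a hedged Go sketch of that completeness test (the path is an assumption):

package main

import (
    "encoding/xml"
    "fmt"
    "io"
    "os"
)

// looksComplete reports whether the file parses as well-formed XML;
// a truncated transfer fails with a parse error before EOF.
func looksComplete(path string) bool {
    f, err := os.Open(path)
    if err != nil {
        return false
    }
    defer f.Close()
    dec := xml.NewDecoder(f)
    for {
        _, err := dec.Token()
        if err == io.EOF {
            return true
        }
        if err != nil {
            return false
        }
    }
}

func main() {
    fmt.Println(looksComplete("incoming/data.xml"))
}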
lsof would possibly be an option, although you've identified your Linux portability issue. If you use this, note the -F option, which formats the output suitable for processing by other programs, rather than being human-readable.
EDIT: Pax identified a fourth (!) method I'd forgotten - using the fact that the timestamp of the file hasn't updated in some time.
There is a fifth method: you can also check whether the FTP session is still active. This works if every peer has its own FTP user account. As long as the user is not logged off from FTP, assume the files are not complete.
