I have n files in a server directory. Is there a way to write a bash script that automatically checks whether the data in a JSON file gets updated when I access it through the portal, and, if I make any changes manually, updates the JSON file as well?
Yes, you can manipulate JSON from a bash script using jq.
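A minimal sketch of what that could look like, assuming the file is called data.json and the field of interest is .status (both names are made up here):

    # jq cannot edit a file in place, so write to a temp file and move it back
    tmp=$(mktemp)
    jq '.status = "updated"' data.json > "$tmp" && mv "$tmp" data.json

    # detect whether the file has changed since the last check
    if ! cmp -s data.json data.json.last; then
        echo "data.json changed"
        cp data.json data.json.last
    fi

For reacting to changes as they happen rather than polling, a tool like inotifywait (from inotify-tools) is a common companion to such a script.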
Related
Hi, I'm using NiFi as an ETL tool.
[image: current NiFi flow]
This is my current process: I use TailFile to detect a CSV file and then send messages to Kafka.
It works fine so far, but I want to delete the CSV file after its contents have been sent to Kafka.
Is there any way?
Thanks
This depends on why you are using TailFile. From the docs,
"Tails" a file, or a list of files, ingesting data from the file as it is written to the file
TailFile is used to get new lines that are added to the same file, as they are written. If you need to tail a file that is being written to, what condition determines that it is no longer being written to?
However, if you are just consuming complete files from the local file system, then you could use GetFile which gives the option to delete the file after it is consumed.
From a remote file system, you could use ListSFTP and FetchSFTP, where FetchSFTP has a Completion Strategy to move or delete the file.
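As a rough sketch of the relevant settings (the directory path is a placeholder, and the property names should be double-checked against your NiFi version's documentation):

    GetFile
        Input Directory  : /data/csv        # wherever the CSV files land
        Keep Source File : false            # the file is removed once it has been picked up

    ListSFTP -> FetchSFTP
        Completion Strategy : Delete File   # or "Move File" to archive instead of deleting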
I would like to know if there is a way to read a file on a remote server using a URL. For example, I have been given a URL to a file on a remote server, and I need to read it using bash commands or any other tool to retrieve data (e.g. view the first 50 rows and write them to a file) without downloading the original file to the local system.
The use case is to avoid downloading/uploading huge files located on the remote server to local systems, and instead access the file content through the URL.
Any resources on this would help.
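If the file is served over HTTP(S), one hedged sketch with curl (the URL and output file name are placeholders): head closes the pipe after 50 lines, so curl stops transferring shortly afterwards instead of pulling down the whole file.

    # stream the remote file and keep only the first 50 lines
    curl -sL "https://remote.example.com/path/huge_file.csv" | head -n 50 > first_50_rows.txt

    # if the server supports range requests, fetch only the first bytes instead
    curl -sL -r 0-65535 "https://remote.example.com/path/huge_file.csv" | head -n 50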
I'm trying to experiment with using scripts in the config/scripts directory. The Elasticsearch docs here say this:
Save the contents of the script as a file called config/scripts/my_script.groovy on every data node in the cluster:
This seems like it's probably really easy, but I'm afraid I don't understand how exactly to put a groovy file "on every data node in the cluster". Would this normally be done through the command line somehow, or can it be done by manually moving the groovy file (in Finder on OSX for example)? I have a test index, but when I look at the file structure on the nodes I'm confused where to put the groovy file. Help, pretty please.
You just need to copy the file to each server running Elasticsearch. If you're just running Elasticsearch on your computer, go to the folder you've installed Elasticsearch into and copy the file into config/scripts in there (you may have to create that folder first). It doesn't matter how the file gets there.
You should see an entry in the logs (or the console if you are running in the foreground) along the lines of
compiling script file [/path/to/elasticsearch/config/scripts/my_script.groovy]
This won't show up straightaway - by default elasticsearch checks for new/updated scripts every 60 seconds (you can change this with the watcher.interval setting)
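If the nodes are reachable over SSH, a minimal sketch for pushing the script out (the host names below are placeholders):

    for node in es-node-1 es-node-2 es-node-3; do
        scp my_script.groovy "$node":/path/to/elasticsearch/config/scripts/
    done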
Since file scripts are deprecated (elastic/elasticsearch#24552 & elastic/elasticsearch#24555), this approach is not going to work anymore.
Using the API is the only way.
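For reference, a rough curl sketch of storing a script through the cluster API instead; the exact endpoint and body vary by Elasticsearch version, and the versions that dropped file scripts also dropped Groovy, so this assumes Painless:

    curl -XPUT "http://localhost:9200/_scripts/my_script" \
        -H 'Content-Type: application/json' \
        -d '{"script": {"lang": "painless", "source": "doc[\"my_field\"].value * 2"}}'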
I have a scenario where I need to replace certain Strings in an attribute file within a cookbook with user input from within a Bash script.
In the current Puppet setup this is done simply by using sed on the module files, since the modules are stored in the file structure as files and folders.
How can I replicate this in the Chef eco-system? Is there a known shortcut?
Or would I have to download the cookbook as a file using knife, modify the content and then re-upload again to make the changes?
I'm not sure that is the best approach. You can definitely use knife download, sed, and knife upload as you mentioned, but a better way would be to make it data-driven: store the values in a data bag or a role, and manipulate those using knife or another API client. Then, in your recipe code, you can read the values out and use them.
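A rough sketch of the data bag route with knife (the bag name, item name, and key are made up for illustration):

    # settings/app.json might contain: {"id": "app", "log_level": "info"}
    knife data bag create settings
    knife data bag from file settings settings/app.json

In the recipe you would then read the value with something like data_bag_item('settings', 'app')['log_level'] and render the attribute or template from it, instead of rewriting cookbook files with sed.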
I have several Azure Shared Access Signatures; I need to access them, list the blobs, and then export the contents. I am hoping I can just write a simple Ruby script and run it on a Mac.
Can someone share sample code? It's basically a "GET" to a URL with a signature, which I think I could do from the command terminal with curl, but I wasn't sure how to do it in Ruby to make it easier to loop through and maybe extend later.
I'd be open to a bash script as well. Thanks.
A quick check on Google returned https://github.com/johnnyhalife/waz-storage. Can you check whether this gem works for you?
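If a bash version is also useful: the Blob service's List Blobs operation works directly with a container-level SAS. A hedged sketch, where the account, container, blob name, and SAS token are all placeholders:

    ACCOUNT="myaccount"
    CONTAINER="mycontainer"
    SAS="sv=...&sig=..."   # the query-string portion of your Shared Access Signature

    # list the blobs in the container (the response is XML)
    curl -s "https://${ACCOUNT}.blob.core.windows.net/${CONTAINER}?restype=container&comp=list&${SAS}"

    # download the contents of one blob
    curl -s "https://${ACCOUNT}.blob.core.windows.net/${CONTAINER}/some_blob.csv?${SAS}" -o some_blob.csv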