Issue manually running playbook for an incident in Azure Sentinel

I have created a custom playbook that I want to test and run manually for an incident. When I do, I get the following error: The gateway did not receive a response from 'Microsoft.SecurityInsights' within the specified time period.
Any ideas?
Thanks!


How to run a K6 script locally and send data to remote InfluxDB instance (No Docker)

I'm extremely new to k6 + InfluxDB + Grafana, and I was given a task to execute certain k6 scripts locally but save/pass the data to a remote InfluxDB instance.
I'm not sure what configuration I'm missing, because every time I run the script pointing at the InfluxDB instance I get an error.
The command that I'm executing is:
k6 run --out influxdb="https://my_influxdb_url/write" //sampleScript.js
But the original URL that was handed over to me was something like this:
https://my_influxdb_url/write?db=DB_NAME&u=USERNAME&p=PASSWORD
And when I execute the first command above, I get the following error:
ERRO[000X] Couldn't write stats error="404 page not found\n" output=InfluxDB1
So I've tried setting K6_INFLUXDB_USERNAME and K6_INFLUXDB_PASSWORD as environment variables, but I'm still getting the same error.
I'm not sure if I'm missing some .yaml file, like a datasource, in which I should fill in those three values (DB_NAME, USERNAME, PASSWORD)?
Or maybe I'm just doing it all wrong and not calling the execution command properly for this scenario.
Another odd thing I noticed is that the output is labeled InfluxDB1 instead of my actual InfluxDB URL, which I guess might be where my issue lies.
Any kind of tip would be greatly appreciated, since all the documentation I've found so far either runs Grafana+InfluxDB in Docker containers or runs everything locally, which is not my case :(
Thanks a lot in advance as always!!
Thanks a lot in advance as always!!
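One hedged guess, based on how k6's InfluxDB v1 output is documented to work: the database name goes in the URL path and k6 builds the /write endpoint itself, so pointing --out directly at .../write would explain the 404, and the InfluxDB1 label is just k6's name for the output type, not your URL. A minimal sketch, reusing the placeholders from the question:
# credentials via the environment variables already mentioned above
export K6_INFLUXDB_USERNAME=USERNAME
export K6_INFLUXDB_PASSWORD=PASSWORD
# database name goes in the URL path; no /write suffix
k6 run --out influxdb=https://my_influxdb_url/DB_NAME sampleScript.js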

AWS status check failed alarm is not taking the action

I have one instance that keeps giving me the headache of a failing system status check, and I have to reboot the instance to get it running again.
I saw that there's an option to create a status check alarm, which I did.
I did receive the notification through email + SNS as I had configured, but the instance did not get rebooted, so I had to go into the EC2 dashboard and reboot it manually.
Are there any settings I'm not configuring correctly, or does anyone have other ideas for rebooting the instance automatically when its status check fails?
Thanks in advance for any suggestions.
You can only use the recover action on a system status check failure, and reboot only works on instance status check failures. They are distinctly different failures with different causes.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html
I would set up separate alarms, one for each condition: reboot for an instance status check failure and recover for a system-level failure, as in the sketch below.
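A minimal sketch with the AWS CLI; the instance ID, region, alarm names, and thresholds are placeholders rather than values from the question, while the metric names and alarm-action ARNs are the documented ones:
# system status check failure -> recover
aws cloudwatch put-metric-alarm \
  --alarm-name recover-on-system-check-failure \
  --namespace AWS/EC2 --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum --period 60 --evaluation-periods 2 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:recover
# instance status check failure -> reboot
aws cloudwatch put-metric-alarm \
  --alarm-name reboot-on-instance-check-failure \
  --namespace AWS/EC2 --metric-name StatusCheckFailed_Instance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum --period 60 --evaluation-periods 2 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:reboot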

Error starting container: API error (500) Hyperledger

I am using the Bluemix network to deploy and test my custom chaincode (link to the chaincode). I'm using the Swagger API to deploy, invoke, and query my chaincode. The deploy and invoke work fine, but when I try to query my chaincode, I keep getting the following error.
The following are the validating peer logs:
Is it some problem with my query code, or a network issue? Any help is appreciated.
The error likely happened during the deploy phase (the logs just show the query). Since "deploy" is an asynchronous transaction that merely submits the transaction to be processed later and returns an ID, it cannot indicate whether the actual execution of the transaction will succeed. The "query" request, however, is synchronous and shows the failure.
Looking at the chaincode, the error is almost certainly due to the import and use of "github.com/op/go-logging" package. As the fabric only copies the chaincode and does not pick up its dependencies, that package is not available at deploy time.
Note that the same code will work when under "github.com/hyperledger/fabric" path as "github.com/op/go-logging" is available as a "vendor" package in that path.
To test this, try commenting out the import statement and all logging from the code, making sure "go build" works locally with the changes first; a sketch of the change is below.
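A minimal sketch of that experiment, swapping go-logging for the standard library log package (the function body is illustrative, not taken from the linked chaincode; your shim import and Init/Invoke/Query methods stay as they are):
package main

import (
	"log" // standard library: available on the peer without vendoring
	// "github.com/op/go-logging" // removed: fabric copies only the chaincode, not its dependencies
)

func main() {
	// was: var logger = logging.MustGetLogger("chaincode"); logger.Info("starting")
	log.Println("starting") // same effect, no external dependency
}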

Run a phantom.js file on Heroku regularly

I've uploaded a single file to Heroku that crawls a website and responds with the edited content in JSON format to an HTTP request. I now want to refresh the content regularly so that it stays up to date. I tried using the Heroku Scheduler, but I am failing to schedule the process so that it runs correctly.
I have specified the following process in the Heroku Scheduler:
run phantomjs phantom.js    // using a 1X dyno, every hour
// phantom.js is the file that contains my source code and runs the server
However if I enter
heroku ps
into the terminal, I only see one web dyno running and no scheduler task. Also, if I type
heroku logs --ps scheduler.1
as described in the Scheduler documentation, there is no output.
What am I doing wrong? Any help would be appreciated.
For what it sounds like you want to accomplish, you need to be constantly running:
1 web dyno
1 background worker
When your scheduled task executes, it will be run by the background worker, which, since you haven't provisioned it, isn't executing. See the sketch below for provisioning one.
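A minimal sketch, assuming a Procfile at the app root; the worker command is a placeholder for whatever your background job actually runs:
# Procfile (illustrative)
web: phantomjs phantom.js
worker: phantomjs worker.js
# then provision the worker dyno from the terminal:
heroku ps:scale worker=1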
Found it: I only had to write
phantomjs phantom.js
in order to get it working. It was the leading "run" that made the expression invalid.

Queued build is not connecting to DB as it uses domainName\computerName instead of domainName\username

I am trying to queue a build in my own build definition, but the SQL connection in my code throws an exception that login failed for user 'domainName\computerName$', which is natural, since it should have used domainName\userAlias.
My question is: why is it using domainName\computerName, and how do I make it use Windows auth with my user instead? Can someone please help me with this?
You need to set the service account that the build service uses on the server(s) running your Build Agent(s). It sounds like it's currently set to run as Network Service.
You can change it by firing up the TFS Admin Console, going to Build Configuration, and changing the properties on the service.
