NixOS - qBittorrent does not see files of torrents on external HDD after reboot - systemd

Currently my qBittorrent setup looks like this:
{ config, pkgs, ... }:
{
  systemd = {
    packages = [ pkgs.qbittorrent-nox ];
    services."qbittorrent-nox@shalva" = {
      enable = true;
      serviceConfig = {
        Type = "simple";
        User = "shalva";
        ExecStart = "${pkgs.qbittorrent-nox}/bin/qbittorrent-nox";
      };
      wantedBy = [ "multi-user.target" ];
    };
  };

  networking.firewall.allowedTCPPorts = [ 8080 ];
}
The code that mounts my HDD, inside hardware-configuration.nix:
fileSystems."/run/media/shalva/MyHDD" = {
  device = "/dev/disk/by-label/MyHDD";
  fsType = "ext4";
  options = [ "nofail" ];
};
When I reboot the server and open the web app, it shows errors for all torrents, saying the files are missing. Maybe something is wrong with the timing of mounting the hard drive? A simple workaround I found is to restart the qBittorrent service, after which all torrents are fine.
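If it is indeed the mount timing, one direction to try (an untested sketch, extending the service definition above) is to make the unit wait for the mount explicitly; this matters because nofail means boot does not otherwise block on this filesystem:

systemd.services."qbittorrent-nox@shalva" = {
  # RequiresMountsFor pulls in and orders the service after whatever
  # systemd mount unit covers the given path.
  unitConfig.RequiresMountsFor = "/run/media/shalva/MyHDD";
};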

Related

Terraform Windows Server 2016 in Azure and domain join issue: logging in to domain with Network Level Authentication error message

I successfully got a Windows Server 2016 VM to come up and join the domain. However, when I go to log in over Remote Desktop it throws an error about Network Level Authentication: something about the domain controller not being contactable to perform Network Level Authentication (NLA).
I saw a video on workarounds at https://www.bing.com/videos/search?q=requires+network+level+authentication+error&docid=608000415751557665&mid=8CE580438CBAEAC747AC8CE580438CBAEAC747AC&view=detail&FORM=VIRE.
Is there a way to address this with Terraform, up front, instead?
To join the domain I am using:
name                 = "domjoin"
virtual_machine_id   = azurerm_windows_virtual_machine.vm_windows_vm.id
publisher            = "Microsoft.Compute"
type                 = "JsonADDomainExtension"
type_handler_version = "1.3"

settings = <<SETTINGS
  {
    "Name": "mydomain.com",
    "User": "mydomain.com\\myuser",
    "Restart": "true",
    "Options": "3"
  }
SETTINGS

protected_settings = <<PROTECTED_SETTINGS
  {
    "Password": "${var.admin_password}"
  }
PROTECTED_SETTINGS

depends_on = [azurerm_windows_virtual_machine.vm_windows_vm]
Is there an option I should add in this domjoin code, perhaps?
I can log in with my local admin account just fine. I see the server is connected to the domain. An nslookup on the domain shows an IP address that was configured to be reachable by firewall rules, so it can reach the domain controller.
It seems like there are some settings that could help out (see here); possibly all that is needed is:
"EnableCredSspSupport": "true" inside your domjoin settings block.
You might also need to do something with the registry on the server side, which can be done by using remote-exec.
For example, something like:
resource "azurerm_windows_virtual_machine" "example" {
name = "example-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
vm_size = "Standard_DS1_v2"
storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-Datacenter"
version = "latest"
}
storage_os_disk {
name = "example-os-disk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "example-vm"
admin_username = "adminuser"
admin_password = "SuperSecurePassword1234!"
}
os_profile_windows_config {
provision_vm_agent = true
}
provisioner "remote-exec" {
inline = [
"echo Updating Windows...",
"powershell.exe -Command \"& {Get-WindowsUpdate -Install}\"",
"echo Done updating Windows."
]
connection {
type = "winrm"
user = "adminuser"
password = "SuperSecurePassword1234!"
timeout = "30m"
}
}
}
In order to set the correct keys in the registry you might need something like this inside the remote-exec block (I have not validated this code):
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'SecurityLayer' -Value 0
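Wired into the remote-exec provisioner above, that command could look roughly like this (equally unvalidated; mind the nested quoting and the doubled backslashes HCL requires):

provisioner "remote-exec" {
  inline = [
    "powershell.exe -Command \"Set-ItemProperty -Path 'HKLM:\\SYSTEM\\CurrentControlSet\\Control\\Terminal Server\\WinStations\\RDP-Tcp' -Name 'SecurityLayer' -Value 0\""
  ]
}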
In order to make the Terraform config cleaner I would recommend using templates for the PowerShell script (see here).
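As a hypothetical illustration (the template file name and variable below are made up), Terraform's templatefile() function can keep the PowerShell out of the HCL:

locals {
  # Renders scripts/set-rdp.ps1.tftpl with the given variables; the result
  # is a plain string that can be fed to a remote-exec inline command.
  rdp_script = templatefile("${path.module}/scripts/set-rdp.ps1.tftpl", {
    security_layer = 0
  })
}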
Hope this helps

Terraform timeout issue on local system

So I'm new to Terraform and trying to learn it, with some great difficulty. I run into a timeout issue when applying certain things to a Kubernetes cluster that is hosted locally.
Setup
Running on Windows 10
Running Docker for Windows with the Kubernetes cluster enabled
Running WSL2 Ubuntu 20 on Windows
Installed Terraform and able to use kubectl against the cluster
Coding
I'm following the coding example from this website:
https://nickjanetakis.com/blog/configuring-a-kind-cluster-with-nginx-ingress-using-terraform-and-helm
But with a modification: I'm running the steps in demo.sh manually, and the kubectl file they reference I've turned into a Terraform file, as this is how I would do deployments in the future. Also, I've had to comment out the provisioner "local-exec", as the kubectl commands outright fail in Terraform.
Code file
resource "kubernetes_pod" "foo-app" {
depends_on = [helm_release.ingress_nginx]
metadata {
name = "foo-app"
namespace = var.ingress_nginx_namespace
labels = {
app = "foo"
}
}
spec {
container {
name = "foo-app"
image = "hashicorp/http-eco:0.2.3"
args = ["-text=foo"]
}
}
}
resource "kubernetes_pod" "bar-app" {
depends_on = [helm_release.ingress_nginx]
metadata {
name = "bar-app"
namespace = var.ingress_nginx_namespace
labels = {
app = "bar"
}
}
spec {
container {
name = "bar-app"
image = "hashicorp/http-eco:0.2.3"
args = ["-text=bar"]
}
}
}
resource "kubernetes_service" "foo-service" {
depends_on = [kubernetes_pod.foo-app]
metadata {
name = "foo-service"
namespace = var.ingress_nginx_namespace
}
spec {
selector = {
app = "foo"
}
port {
port = 5678
}
}
}
resource "kubernetes_service" "bar-service" {
depends_on = [kubernetes_pod.bar-app]
metadata {
name = "bar-service"
namespace = var.ingress_nginx_namespace
}
spec {
selector = {
app = "bar"
}
port {
port = 5678
}
}
}
resource "kubernetes_ingress" "example-ingress" {
depends_on = [kubernetes_service.foo-service, kubernetes_service.bar-service]
metadata {
name = "example-ingress"
namespace = var.ingress_nginx_namespace
}
spec {
rule {
host = "172.21.220.84"
http {
path {
path = "/foo"
backend {
service_name = "foo-service"
service_port = 5678
}
}
path {
path = "/var"
backend {
service_name = "bar-service"
service_port = 5678
}
}
}
}
}
}
The problem
I run into 2 problems. First, the pods cannot find the namespace, though it has been built; kubectl shows it as a valid namespace as well.
But the main problem I have is timeouts. These happen on different elements altogether, depending on the example. In trying to deploy the pods to the local cluster I get a 5 minute timeout, with this as the output after 5 minutes:
╷
│ Error: context deadline exceeded
│
│ with kubernetes_pod.foo-app,
│ on services.tf line 1, in resource "kubernetes_pod" "foo-app":
│ 1: resource "kubernetes_pod" "foo-app" {
│
╵
╷
│ Error: context deadline exceeded
│
│ with kubernetes_pod.bar-app,
│ on services.tf line 18, in resource "kubernetes_pod" "bar-app":
│ 18: resource "kubernetes_pod" "bar-app" {
This will happen for several kinds of things; I have this problem with pods, deployments, and ingresses. This is very frustrating, and I would like to know: is there a particular setting I need, or am I doing something wrong with my setup?
Thanks!
Edit #1:
So I repeated this on an Ubuntu VM with Minikube installed, getting the same behavior. I copied the scripts, got Terraform installed, confirmed Minikube is all up and running, yet I'm getting the same behavior there as well. I'm wondering if this is an issue with Kubernetes and Terraform?
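For reference, the 5 minute deadline in the errors above matches the Kubernetes provider's default create timeout for a pod; it can be raised with a timeouts block, sketched below. That only moves the deadline and won't fix a pod that never becomes Ready. (As an aside, the image on Docker Hub is hashicorp/http-echo; the http-eco spelling in the code above would leave the pods stuck in ImagePullBackOff until exactly this kind of deadline expires, so that is worth ruling out first.)

resource "kubernetes_pod" "foo-app" {
  metadata {
    name      = "foo-app"
    namespace = var.ingress_nginx_namespace
  }

  spec {
    container {
      name  = "foo-app"
      image = "hashicorp/http-echo:0.2.3" # note: echo, not eco
      args  = ["-text=foo"]
    }
  }

  # Raise the provider's default 5m create deadline.
  timeouts {
    create = "15m"
  }
}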

How can I run a shell script on multiple VMware VMs created by a Terraform module?

I am using this module to spin up multiple VMs on my VMware cluster, https://registry.terraform.io/modules/Terraform-VMWare-Modules/vm/vsphere/1.6.0, and I want to run a shell script on all of the VMs afterwards using a null_resource. With what I currently have, it complains that the host was not given a string, which makes sense. Here is my null_resource:
# main.tf
module "jenkins-linuxvm-centos7" {
  source = "Terraform-VMWare-Modules/vm/vsphere"
  ...
}

resource "null_resource" "vm" {
  triggers = {
    vm_ips = join(",", module.jenkins-linuxvm-centos7.Linux-ip)
  }

  # export TF_VAR_root_password=<pass>
  connection {
    type     = "ssh"
    host     = module.jenkins-linuxvm-centos7.Linux-ip
    user     = "root"
    password = var.vm_root_password
    port     = "22"
    agent    = false
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}
Do I need to use a dynamic block somehow? Or how can I modify host = module.jenkins-linuxvm-centos7.Linux-ip to include all the hosts I want to run it on?
You have to run it in a for_each loop. Below is example code where I am looping over the sql_var map variable; you will have to do it against the output of IPs, module.jenkins-linuxvm-centos7.Linux-ip. You should be able to reference the IP of each machine as something like each.value, I guess; I don't know what your output looks like, so I'm guessing. If you are new to loops, here is one nice tutorial:
https://blog.boltops.com/2020/10/04/terraform-hcl-loops-with-count-and-for-each
resource "null_resource" "instance" {
for_each = var.sql_var
provisioner "local-exec" {
command = "echo ${each.key} >> hello.txt"
}
}
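Adapted to the question's module output, that might look roughly like this (untested; it assumes Linux-ip is a list of IP address strings):

resource "null_resource" "vm" {
  # for_each needs a set or map, so convert the list of IPs with toset().
  for_each = toset(module.jenkins-linuxvm-centos7.Linux-ip)

  connection {
    type     = "ssh"
    host     = each.value # one VM's IP per instance of this resource
    user     = "root"
    password = var.vm_root_password
  }

  provisioner "file" {
    source      = "resize_disk.sh"
    destination = "/tmp/resize_disk.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/resize_disk.sh",
      "/tmp/resize_disk.sh"
    ]
  }
}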

Icinga2 notification just once on state change

I have set up icinga2 to monitor a few services with different intervals, so one service might be checked every 10 seconds. If it gives a critical error I will receive a notification, but I will receive it every 10 seconds while the error persists, or until I acknowledge it. I just want to receive it once for each state change. Then maybe again after a specified time, but that is not as important.
Here is my config:
This is more or less the standard template.conf, but I have added "interval = 0s", because I read that it should prevent notifications from being sent multiple times.
template Notification "mail-service-notification" {
  command = "mail-service-notification"
  interval = 0s
  states = [ OK, Critical ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]
  vars += {
    notification_logtosyslog = false
  }
  period = "24x7"
}
And here is the part of the notification.conf that includes the template:
object NotificationCommand "telegram-service-notification" {
  import "plugin-notification-command"

  command = [ SysconfDir + "/icinga2/scripts/telegram-service-notification.sh" ]

  env = {
    NOTIFICATIONTYPE = "$notification.type$"
    SERVICEDESC = "$service.name$"
    HOSTNAME = "$host.name$"
    HOSTALIAS = "$host.display_name$"
    HOSTADDRESS = "$address$"
    SERVICESTATE = "$service.state$"
    LONGDATETIME = "$icinga.long_date_time$"
    SERVICEOUTPUT = "$service.output$"
    NOTIFICATIONAUTHORNAME = "$notification.author$"
    NOTIFICATIONCOMMENT = "$notification.comment$"
    HOSTDISPLAYNAME = "$host.display_name$"
    SERVICEDISPLAYNAME = "$service.display_name$"
    TELEGRAM_BOT_TOKEN = TelegramBotToken
    TELEGRAM_CHAT_ID = "$user.vars.telegram_chat_id$"
  }
}

apply Notification "telegram-icingaadmin" to Service {
  import "mail-service-notification"

  command = "telegram-service-notification"
  user_groups = [ "icingaadmins" ]

  assign where host.name
}
I think you had a typo: it should work if you set interval = 0 (not interval = 0s).
After that change you must restart the icinga2 service.
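Applied to the template from the question, the change is just the one line (sketch):

template Notification "mail-service-notification" {
  command = "mail-service-notification"
  interval = 0 // notify once per state change, never re-notify
  states = [ OK, Critical ]
  types = [ Problem, Acknowledgement, Recovery, Custom,
            FlappingStart, FlappingEnd,
            DowntimeStart, DowntimeEnd, DowntimeRemoved ]
  vars += {
    notification_logtosyslog = false
  }
  period = "24x7"
}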

node-config doesn't read config files when app is set up as Windows Service using node-windows

I'm using node-windows to set up my application to run as a Windows Service, and node-config to manage configuration settings. Of course, everything works fine when I run my application manually with the node app.js command. When I install it as a service and it starts, the configuration settings are empty. I have a production.json file in the ./config folder, and I set NODE_ENV to production in the install script. I can confirm that the variable is set correctly, and still nothing. log.info('CONFIG_DIR: ' + config.util.getEnv('CONFIG_DIR')); produces undefined even if I explicitly set it in an env value for the service. Looking for any insight.
install script:
var Service = require('node-windows').Service;
var path = require('path');

// Create a new service object
var svc = new Service({
  name: 'Excel Data Import',
  description: 'Excel Data Import Service.',
  script: path.join(__dirname, "app.js"), // path to application file
  env: [
    { name: "NODE_ENV", value: "production" },
    { name: "CONFIG_DIR", value: "./config" },
    { name: "$NODE_CONFIG_DIR", value: "./config" }
  ]
});

// Listen for the "install" event, which indicates the
// process is available as a service.
svc.on('install', function () {
  svc.start();
});

svc.install();
app script:
var config = require('config');
var path = require('path');
var EventLogger = require('node-windows').EventLogger;

var log = new EventLogger('Excel Data Import');

init();

function init() {
  log.info("init");
  if (config.has("File.fileFolder")) {
    var pathConfig = config.get("File.fileFolder");
    log.info(pathConfig);
    var DirectoryWatcher = require('directory-watcher');
    DirectoryWatcher.create(pathConfig, function (err, watcher) {
      //...
    });
  } else {
    log.info("config doesn't have File.fileFolder");
  }
}
I know this response is very late, but I also had the same problem, and here is how I solved it:
var svc = new Service({
  name: 'ProcessName',
  description: 'Process Description',
  script: require('path').join(__dirname, 'bin\\www'),
  env: [
    { name: "NODE_ENV", value: "development" },
    { name: "PORT", value: PORT },
    { name: "NODE_CONFIG_DIR", value: "c:\\route-to-your-proyect\\config" }
  ]
});
When you are using Windows, prefixing your environment variables with $ is not required.
Also, when your run script isn't in the same dir as your config dir, you have to provide a full path to your config dir.
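A variant of the same idea (hypothetical, not part of the original answer) is to build that full path from the install script's own location instead of hard-coding a drive path:

var path = require('path');

var svc = new Service({
  name: 'ProcessName',
  description: 'Process Description',
  script: path.join(__dirname, 'bin\\www'),
  env: [
    { name: "NODE_ENV", value: "production" },
    // node-config reads NODE_CONFIG_DIR; an absolute path avoids depending
    // on the service's working directory, which is not the project dir.
    { name: "NODE_CONFIG_DIR", value: path.join(__dirname, 'config') }
  ]
});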
When you have errors with node-windows, it is also helpful to dig into the error log. It is located at rundirectory/daemon/processname.err.log.
I hope this will help somebody.