PowerShell script taking too long - powershell-4.0

I have a PowerShell script which involves creating multiple directories, and the command used to create a directory is New-Item -ItemType Directory -Force -Path "abc\WEB-INF\classes\".
The log for the directory-creation step shows that it took 50 seconds (15:36:42 to 15:37:32) to create 2 directories (WEB-INF, classes). This behaviour is completely abnormal. Is there any other way I can create directories to save time?
2016-02-08 15:36:42 [stdout]PSPath : Microsoft.PowerShell.Core\FileSystem::C:\abc
2016-02-08 15:36:44 [stdout] \WEB-INF\classes\
2016-02-08 15:36:46 [stdout]PSParentPath : Microsoft.PowerShell.Core\FileSystem::C:\abc
2016-02-08 15:36:49 [stdout] ment\WEB-INF
2016-02-08 15:36:51 [stdout]PSChildName : classes
2016-02-08 15:36:53 [stdout]PSDrive : C
2016-02-08 15:36:55 [stdout]PSProvider : Microsoft.PowerShell.Core\FileSystem
2016-02-08 15:36:58 [stdout]PSIsContainer : True
2016-02-08 15:37:00 [stdout]Name : classes
2016-02-08 15:37:02 [stdout]Parent : WEB-INF
2016-02-08 15:37:05 [stdout]Exists : True
2016-02-08 15:37:07 [stdout]Root : C:\
2016-02-08 15:37:09 [stdout]FullName : C:\abc\WEB-INF\classes
2016-02-08 15:37:11 [stdout]Extension :
2016-02-08 15:37:14 [stdout]CreationTime : 08/02/2016 12:59:08
2016-02-08 15:37:16 [stdout]CreationTimeUtc : 08/02/2016 12:59:08
2016-02-08 15:37:18 [stdout]LastAccessTime : 08/02/2016 13:00:12
2016-02-08 15:37:21 [stdout]LastAccessTimeUtc : 08/02/2016 13:00:12
2016-02-08 15:37:23 [stdout]LastWriteTime : 08/02/2016 13:00:12
2016-02-08 15:37:25 [stdout]LastWriteTimeUtc : 08/02/2016 13:00:12
2016-02-08 15:37:27 [stdout]Attributes : Directory
2016-02-08 15:37:30 [stdout]BaseName : classes
2016-02-08 15:37:32 [stdout]Mode : d-----

Found the bug. It was not actually the command above that was running slow. I was using
Start-Transcript -Path "abc\def\ghi.log" -Force -Append
which does not check whether the parent directory (abc\def) exists. It still reports the log below, but every command that runs afterwards is slowed down until Stop-Transcript:
Transcript started, output file is abc\def\ghi.log
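For anyone hitting the same thing, a minimal sketch (paths are placeholders) that creates the transcript's parent directory up front and discards New-Item's output so the DirectoryInfo object isn't echoed to the console:

$logPath = "abc\def\ghi.log"
$logDir  = Split-Path -Path $logPath -Parent

# Create the parent directory if it is missing; Out-Null drops the
# DirectoryInfo object that New-Item would otherwise write to output.
if (-not (Test-Path -Path $logDir)) {
    New-Item -ItemType Directory -Force -Path $logDir | Out-Null
}

Start-Transcript -Path $logPath -Force -Append
# ... rest of the script ...
Stop-Transcript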

Related

TimeoutException when trying to connect to Azure service bus queue through informatica

I am trying to load 2k records onto an Azure Service Bus queue through Informatica, but I'm getting a timeout exception. The connection works fine for 700 records, which load onto the queue successfully.
I have created JMS and JNDI connections, and they work fine when the number of records is smaller.
Error:
2020-05-04 23:27:28 : ERROR : (3084 | WRITER_1__1) : (IS | PC_INT_EE_QA) : node01_lxinfaeeqa1 : JAVA PLUGIN_1762 : [ERROR] JMS writer encountered a JMS exception: Timed out while waiting to get credit to sendException Stack: javax.jms.JMSException: Timed out while waiting to get credit to send
at org.apache.qpid.amqp_1_0.jms.impl.MessageProducerImpl.send(MessageProducerImpl.java:331)
at com.informatica.powerconnect.jms.server.writer.JMSMessageWriter$QueueWriter.writeMessage(JMSMessageWriter.java:93)
at com.informatica.powerconnect.jms.server.writer.JMSWriterPartitionDriver.execute(JMSWriterPartitionDriver.java:401)
Linked Exception Stack: java.util.concurrent.TimeoutException
at org.apache.qpid.amqp_1_0.transport.ConnectionEndpoint.waitUntil(ConnectionEndpoint.java:1232)
at org.apache.qpid.amqp_1_0.transport.SessionEndpoint.waitUntil(SessionEndpoint.java:686)
at org.apache.qpid.amqp_1_0.transport.LinkEndpoint.waitUntil(LinkEndpoint.java:360)
at org.apache.qpid.amqp_1_0.client.Sender.send(Sender.java:320)
at org.apache.qpid.amqp_1_0.jms.impl.MessageProducerImpl.send(MessageProducerImpl.java:321)
at com.informatica.powerconnect.jms.server.writer.JMSMessageWriter$QueueWriter.writeMessage(JMSMessageWriter.java:93)
at com.informatica.powerconnect.jms.server.writer.JMSWriterPartitionDriver.execute(JMSWriterPartitionDriver.java:401)
.
2020-05-04 23:27:28 : ERROR : (3084 | WRITER_1__1) : (IS | PC_INT_EE_QA) : node01_lxinfaeeqa1 : JAVA PLUGIN_1762 : [ERROR] at com.informatica.powerconnect.jms.server.writer.JMSWriterPartitionDriver.execute(JMSWriterPartitionDriver.java:431)
2020-05-04 23:27:28 : ERROR : (3084 | WRITER_1__1) : (IS | PC_INT_EE_QA) : node01_lxinfaeeqa1 : SDKS_38502 : Plug-in #300800's target [Target_jms: Partition 1] failed in method [execute].
2020-05-04 23:27:28 : INFO : (3084 | WRITER_1__1) : (IS | PC_INT_EE_QA) : node01_lxinfaeeqa1 : WRT_8333 : Rolling back all the targets due to fatal session error.
2020-05-04 23:28:28 : INFO : (3084 | WRITER_1__1) : (IS | PC_INT_EE_QA) : node01_lxinfaeeqa1 : WRT_8325 : Final rollback executed for the target [Target_jms] at end of load
2020-05-04 23:28:28 : ERROR : (3084 | WRITER_1__1) : (IS | PC_INT_EE_QA) : node01_lxinfaeeqa1 : WRT_8081 : Writer run terminated. [Error in loading data to target table [Target_jms: Partition 1]]
2020-05-04 23:28:28 : INFO : (3084 | WRITER_1__1) : (IS | PC_INT_EE_QA) : node01_lxinfaeeqa1 : WRT_8168 : End loading table [Target_jms: Partition 1] at: Mon May 04 13:58:28 2020
2020-05-04 23:28:28 : INFO : (3084 | WRITER_1__1) : (IS | PC_INT_EE_QA) : node01_lxinfaeeqa1 : WRT_8035 : Load complete time: Mon May 04 13:58:28 2020
Appreciate the help.
The queue size in Azure was 1 GB, but it seems that an Azure Service Bus queue can only accept 100 messages in one transaction. I was able to solve the issue by changing the following properties at the Informatica session level:
Commit type to "Target"
Commit interval to "100"
And in the target properties I kept the JMS priority at 9.
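For reference, the underlying idea is simply to commit the JMS session in batches of at most 100 messages so no single transaction exceeds that limit. A rough, hypothetical sketch of that pattern in plain JMS (the connection factory, queue name, and payloads are placeholders, not the Informatica internals):

import javax.jms.*;

// Hypothetical batching sketch: commit a transacted JMS session every 100
// messages so no single transaction exceeds the broker's per-transaction limit.
public class BatchedQueueWriter {
    private static final int COMMIT_INTERVAL = 100; // mirrors the session-level commit interval

    public static void send(ConnectionFactory factory, String queueName,
                            java.util.List<String> payloads) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // Transacted session: messages only become visible on the queue after commit().
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(session.createQueue(queueName));
            producer.setPriority(9); // matches the JMS priority set in the target properties

            int pending = 0;
            for (String payload : payloads) {
                producer.send(session.createTextMessage(payload));
                if (++pending == COMMIT_INTERVAL) {
                    session.commit();
                    pending = 0;
                }
            }
            if (pending > 0) {
                session.commit(); // flush the final partial batch
            }
        } finally {
            connection.close();
        }
    }
}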

How to update message of dataflows in Nifi Registry?

In the process of upgrading from NiFi 1.9.2 to NiFi 1.10.0, 'flow.xml.gz' is copied to the new installation. However, the files stored in the Registry are unchanged,
as the following output shows:
$ cat 4.snapshot | grep 'version'
"version" : "1.9.2"
"version" : "1.9.2"
"version" : "1.9.2"
"version" : "1.9.2"
"version" : "1.9.2"
"version" : "1.9.2"
"version" : "1.9.2"
"version" : "1.9.2"
How can I update them to show 'version: 1.10.0'? There is a way to commit the group again in NiFi 1.10.0, but that is too much trouble. Any other suggestions? Thank you!
The version fields in the Registry will get updated the next time you save a new version of those flows. It doesn't cause any issues for the versions to remain as 1.9.2; those flows can still be imported and used in 1.10.0.

Getting error while using TeamCity command line tool. What is missing?

I'm trying to use the TeamCity command line tool but I'm receiving an error. What is missing?
The command executed:
java -jar tcc.jar info --host http://tc
The output:
[Connecting to "http://tc" TeamCity Server] started
[Connecting to "http://tc" TeamCity Server] done
[Logging in] started
[Logging in] done
[ ] info: error
com.thoughtworks.xstream.converters.ConversionException: No enum constant jetbrains.buildServer.BuildTypeDescriptor.CheckoutType.AUTO : No enum constant jetbrains.buildServer.BuildTypeDescriptor.CheckoutType.AUTO
---- Debugging information ----
message : No enum constant jetbrains.buildServer.BuildTypeDescriptor.CheckoutType.AUTO
cause-exception : java.lang.IllegalArgumentException
cause-message : No enum constant jetbrains.buildServer.BuildTypeDescriptor.CheckoutType.AUTO
class : jetbrains.buildServer.BuildTypeDescriptor$CheckoutType
required-type : jetbrains.buildServer.BuildTypeDescriptor$CheckoutType
converter-type : com.thoughtworks.xstream.converters.enums.EnumConverter
path : /Project/configs/Configuration/checkoutType
line number : 1
class[1] : jetbrains.buildServer.BuildTypeData
converter-type[1] : com.thoughtworks.xstream.converters.reflection.ReflectionConverter
class[2] : java.util.ArrayList
converter-type[2] : com.thoughtworks.xstream.converters.collections.CollectionConverter
class[3] : jetbrains.buildServer.ProjectData
version : null
-------------------------------

Fluentd: Could not push logs to Elasticsearch

I have deployed Elasticsearch 2.2.0 and I'm now sending logs to it using td-agent 2.3.0-0.
The final match in the tag chain is:
<match extra.geoip.processed5.**>
  type copy
  <store>
    type file
    path /var/log/td-agent/sp_l5
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
  </store>
  <store>
    type elasticsearch
    host 11.0.0.174
    port 9200
    logstash_format true
    logstash_prefix logstash_business
    logstash_dateformat %Y.%m
    flush_interval 5s
  </store>
</match>
Now, I have added a timeout within the elasticsearch <store> block:
request_timeout 45s
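So that store section now looks roughly like this (same values as above, plus the added timeout):

<store>
  type elasticsearch
  host 11.0.0.174
  port 9200
  logstash_format true
  logstash_prefix logstash_business
  logstash_dateformat %Y.%m
  flush_interval 5s
  request_timeout 45s
</store>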
This is the td-agent.log with debug enabled.
2016-02-08 15:58:07 +0100 [info]: plugin/in_syslog.rb:176:listen: listening syslog socket on 0.0.0.0:5514 with udp
2016-02-08 15:59:03 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 15:59:03 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:03:33 +0100 [warn]: plugin/out_elasticsearch.rb:200:rescue in send: Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-02-08 16:03:33 +0100 [warn]: plugin/out_elasticsearch.rb:200:rescue in send: Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-02-08 16:03:35 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:03:35 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:08:05 +0100 [warn]: plugin/out_elasticsearch.rb:200:rescue in send: Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-02-08 16:08:05 +0100 [warn]: plugin/out_elasticsearch.rb:200:rescue in send: Could not push logs to Elasticsearch, resetting connection and trying again. read timeout reached
2016-02-08 16:08:09 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:08:09 +0100 [info]: plugin/out_elasticsearch.rb:77:client: Connection opened to Elasticsearch cluster => {:host=>"11.0.0.174", :port=>9200, :scheme=>"http"}
2016-02-08 16:12:40 +0100 [warn]: fluent/output.rb:354:rescue in try_flush: temporarily failed to flush the buffer. next_retry=2016-02-08 15:59:04 +0100 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Could not push logs to Elasticsearch after 2 retries. read timeout reached" plugin_id="object:fd9738"
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:204:in `rescue in send'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:194:in `send'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:188:in `write'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/buffer.rb:345:in `write_chunk'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/buffer.rb:324:in `pop'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/output.rb:321:in `try_flush'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/output.rb:140:in `run'
2016-02-08 16:12:40 +0100 [warn]: fluent/output.rb:354:rescue in try_flush: temporarily failed to flush the buffer. next_retry=2016-02-08 15:59:04 +0100 error_class="Fluent::ElasticsearchOutput::ConnectionFailure" error="Could not push logs to Elasticsearch after 2 retries. read timeout reached" plugin_id="object:1034980"
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:204:in `rescue in send'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:194:in `send'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-elasticsearch-1.3.0/lib/fluent/plugin/out_elasticsearch.rb:188:in `write'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/buffer.rb:345:in `write_chunk'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/buffer.rb:324:in `pop'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/output.rb:321:in `try_flush'
2016-02-08 16:12:40 +0100 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.19/lib/fluent/output.rb:140:in `run'
I'm running this on AWS on Ubuntu 14.04 (c3.large). I have tested access from the td-agent machine by creating an index, adding documents, and deleting the index with curl, all without any problem. (To be sure, I opened up all traffic in my Security Groups.)
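Those curl checks were along these lines (reconstructed here; the index name is just a throwaway example):

# Run from the td-agent machine against the Elasticsearch node.
curl -XPUT 'http://11.0.0.174:9200/curl_test'
curl -XPOST 'http://11.0.0.174:9200/curl_test/doc' -d '{"msg":"hello"}'
curl -XGET 'http://11.0.0.174:9200/curl_test/_search?q=msg:hello'
curl -XDELETE 'http://11.0.0.174:9200/curl_test'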
Another test from the td-agent machine:
root@bilbo:~# telnet 11.0.0.174 9200
Trying 11.0.0.174...
Connected to 11.0.0.174.
Escape character is '^]'.
GET / HTTP/1.0
HTTP/1.0 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 320
{
  "name" : "gandalf-gandalf",
  "cluster_name" : "aaaa_dev",
  "version" : {
    "number" : "2.2.0",
    "build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",
    "build_timestamp" : "2016-01-27T13:32:39Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
Connection closed by foreign host.
root@bilbo:~#
With strace I can see this
[pid 10774] connect(23, {sa_family=AF_INET, sin_port=htons(9200), sin_addr=inet_addr("11.0.0.174")}, 16) = -1 EINPROGRESS (Operation now in progress)
[pid 10774] clock_gettime(CLOCK_MONOTONIC, {4876, 531526283}) = 0
[pid 10774] select(24, NULL, [23], NULL, {45, 0}) = 1 (out [23], left {44, 999925})
[pid 10774] fcntl(23, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
[pid 10774] connect(23, {sa_family=AF_INET, sin_port=htons(9200), sin_addr=inet_addr("11.0.0.174")}, 16) = 0
[pid 10774] fcntl(23, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
[pid 10774] write(23, "POST /_bulk HTTP/1.1\r\nUser-Agent: Faraday v0.9.2\r\nHost: 11.0.0.174:9200\r\nContent-Length: 4798\r\n\r\n", 97) = 97
[pid 10774] fcntl(23, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
[pid 10774] write(23, "{\"index\":{\"_index\":\"logstash_apache-2016.02.08\",\"_type\":\"fluentd\"}}\n{\"message\":\"Feb 8 16:18:47 bilbo ::apache::PRE::access: - 10.0.0.15 - control [08/Feb/2016:16:18:47 +0100] \\\"GET /status/memcached.php HTTP/1.1\\\" 200 1785 \\\"-\\\" \\\"check_http/v2.0 (monitoring-plugins 2.0)\\\" control-pre.fluzo.com:443 0\",\"n\":\"bilbo\",\"s\":\"info\",\"f\":\"local3\",\"t\":\"Feb 8 16:18:47\",\"h\":\"bilbo\",\"a\":\"apache\",\"e\":\"PRE\",\"o\":\"access\",\"ip\":\"-\",\"ip2\":\"10.0.0.15\",\"rl\":\"-\",\"ru\":\"control\",\"rt\":\"[08/Feb/2016:16:18:47 +0100]\",\"met\":\"GET\",\"pqf\":\"status/memcached.php\",\"hv\":\"HTTP/1.1\",\"st\":\"200\",\"bs\":\"1785\",\"ref\":\"-\",\"ua\":\"check_http/v2.0 (monitoring-plugins 2.0)\",\"vh\":\"control-pre.aaaa.com\",\"p\":\"443\",\"rpt\":\"0\",\"co\":null,\"ci\":null,\"la\":null,\"lo\":null,\"ar\":null,\"dm\":null,\"re\":null,\"#timestamp\":\"2016-02-08T16:19:47+01:00\"}\n{\"index\":{\"_index\":\"logstash_apache-2016.02.08\",\"_type\":\"fluentd\"}}\n{\"message\":\"Feb 8 16:18:47 bilbo ::apache::PRE::access: - 10.0.0.15 - control [08/Feb/2016:16:18:47 +0100] \\\"GET /status/core.php HTTP/1.1\\\" 200 1898 \\\"-\\\" \\\"c"..., 4798) = 4798
It seems that the connection is open.
To rule out anything machine-specific, I also installed td-agent on the Elasticsearch machine itself, with the same result.
This is my Elasticsearch configuration:
### MANAGED BY PUPPET ###
---
bootstrap:
  mlockall: true
cluster:
  name: aaaa0_dev
discovery:
  zen:
    minimum_master_nodes: 1
    ping:
      multicast:
        enabled: false
      unicast:
        hosts:
          - 11.0.0.174
gateway:
  expected_nodes: 1
  recover_after_nodes: 1
  recover_after_time: 5m
hostname: gandalf
http:
  compression: true
index:
  store:
    compress:
      stored: true
    type: niofs
network:
  bind_host: 11.0.0.174
  publish_host: 11.0.0.174
node:
  name: gandalf-gandalf
path:
  data: /var/lib/elasticsearch-gandalf
  logs: /var/log/elasticsearch/gandalf
transport:
  tcp:
    compress: true
Any ideas?
Thanks.
UPDATE
Against a real Elasticsearch cluster (3 nodes) it works.
This is that cluster's configuration:
### MANAGED BY PUPPET ###
---
bootstrap:
  mlockall: true
cluster:
  name: aaaa
discovery:
  zen:
    minimum_master_nodes: 2
    ping:
      multicast:
        enabled: false
      unicast:
        hosts:
          - el0
          - el1
          - el2
gateway:
  expected_nodes: 3
  recover_after_nodes: 2
  recover_after_time: 5m
hostname: kili
http:
  compression: true
index:
  store:
    compress:
      stored: true
    type: niofs
network:
  bind_host: 11.0.0.253
  publish_host: 11.0.0.253
node:
  name: kili-kili
path:
  data:
    - /var/lib/elasticsearch0
    - /var/lib/elasticsearch1
  logs: /var/log/elasticsearch/kili
transport:
  tcp:
    compress: true
However, Kibana did manage to create its .kibana index on the single-node Elasticsearch, and I also tested that node with curl.

print values at fixed position in Terminal window using UNIX

I am running a shell script which produces the output below.
Built-By : apache
Created-By : Apache Maven
Implementation-Title : testApp
Implementation-Vendor-Id : com.test.app
Implementation-Version : testBox
Manifest-Version : 1.0
appname : TestStar
build-date : 02-03-2014-13 : 41
version : testBox
I'm expecting the output below (please ignore the underscores; they only mark the padding):
Built-By_________________: apache
Created-By_______________: Apache Maven
Implementation-Title_____: testApp
Implementation-Vendor-Id_: com.test.app
Implementation-Version___: testBox
Manifest-Version_________: 1.0
appname__________________: TestStar
build-date_______________: 02-03-2014-13 : 41
version__________________: testBox
Can someone please help? I am iterating over two arrays to print these values.
Show us your code.
Maybe this will help:
#!/bin/bash
key=("appname" "version" "Created-By")
value=("TestStar" "testBox" "Apache Maven")
# Pad each key to a fixed width (make it at least as wide as the longest key)
# so the values start in the same column on every line.
for i in "${!key[@]}"; do
    printf "%-15s %s\n" "${key[i]}" "${value[i]}"
done
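If the keys are not known ahead of time, the pad width can be computed from the longest key first. A small sketch along the same lines (the extra entries are only illustrative):

#!/bin/bash
key=("Built-By" "Created-By" "Implementation-Vendor-Id" "appname")
value=("apache" "Apache Maven" "com.test.app" "TestStar")

# Find the longest key so every colon lines up one column after it.
width=0
for k in "${key[@]}"; do
    (( ${#k} > width )) && width=${#k}
done

for i in "${!key[@]}"; do
    printf "%-$((width + 1))s: %s\n" "${key[i]}" "${value[i]}"
done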
