Azure blob storage not storing locally - azure-blob-storage

I want to use the azure-blob-storage container to store data in a local directory. I use upload_blob from another container for this purpose. The file gets uploaded to the cloud but is not stored in the local path. I have set up binds and the device-to-cloud upload properties, and I also changed the permissions on the directory with "chmod 777". After all this, the file is still not saved locally.
Python function:
with blob_service_client.get_blob_client(container=container_name, blob=local_file_name) as upload_client:
    with open(upload_file_path, "rb") as data:
        print("Uploading the file")
        upload_client.upload_blob(data, blob_type="BlockBlob", overwrite=True)
        print("Finished uploading")
Binds:
"HostConfig": {
"Binds": [
"/opt/localstorage/blob/:/blobroot"
]
Upload properties:
"blobstorage": {
    "properties.desired": {
        "deviceToCloudUploadProperties": {
            "uploadOn": true,
            "uploadOrder": "NewestFirst",
            "cloudStorageConnectionString": "xxxx",
            "storageContainersForUpload": {
                "bloboutput": {
                    "target": "bloboutput"
                }
            }
        },
        "deviceAutoDeleteProperties": {
            "deleteOn": false,
            "deleteAfterMinutes": 15
        }
    }
}
Logs:
[2021-04-20 04:23:57.857] [info ] [tid 1] Info: Successfully loaded {0}: {1}, p0="Nephos.MaskClientIPAddressesInLogs", p1="False"
[2021-04-20 04:23:57.857] [info ] [tid 1] Info: Loading config Param {0} ({1}) read: {2}, p0="NephosIncludeInternalDetailsInErrorResponses", p1="Include internal details in error responses", p2="true"
[2021-04-20 04:23:57.857] [info ] [tid 1] Info: Successfully loaded {0}: {1}, p0="NephosIncludeInternalDetailsInErrorResponses", p1="True"
[2021-04-20 04:23:57.857] [info ] [tid 1] Info: Loading config Param {0} ({1}) read: {2}, p0="StampName", p1="Stamp Name", p2="Default Stamp"
[2021-04-20 04:23:57.924] [info ] [tid 1] Microsoft.AzureStack.Services.Storage.EntryPoint.BlobService: BlobService - StartAsync completed
[2021-04-20 04:23:57.925] [info ] [tid 1] Microsoft.Azure.Devices.BlobStorage.Tiering.BlobTieringService: Starting service...
[2021-04-20 04:23:57.937] [info ] [tid 1] [BlobInterface.cc:1494] [ListBlobsInOrder] ListBlobsInOrder received. Container:bloboutput BlobNameStart:null MaxBlobNames:1 OrderType:1 Flags:1
[2021-04-20 04:23:57.937] [error ] [tid 1] [MetaStore.cc:1953] [ListBlobsInOrder] Container not found. Name:bloboutput

Since you mentioned storing files 'locally', you should use the download_blob method; upload_blob only pushes the data to the container in the cloud. The tutorials cover this.
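A minimal sketch of what that looks like with the v12 azure-storage-blob SDK; the connection string, container name, blob name, and local path below are placeholders, not values from your setup:
from azure.storage.blob import BlobServiceClient

# Placeholders: substitute your own connection string, container, blob and path.
blob_service_client = BlobServiceClient.from_connection_string("<connection-string>")
blob_client = blob_service_client.get_blob_client(container="bloboutput", blob="myfile.txt")

# Stream the blob's contents into a local file.
with open("/opt/localstorage/blob/myfile.txt", "wb") as local_file:
    local_file.write(blob_client.download_blob().readall())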

Related

Deploy Jhipster on clevercloud

I'm deploying a Jhipster application on Clevercloud.
I have set up some configuration:
war.json
{
    "build": {
        "type": "maven",
        "goal": "package -Pprod -DskipTests"
    },
    "deploy": {
        "goal": "package -Pprod -DskipTests",
        "container": "TOMCAT8",
        "war": [
            {
                "file": "target/myapp-1.0.0.war"
            }
        ]
    }
}
maven.json
{
    "build": {
        "type": "maven",
        "goal": "package -Pprod -DskipTests"
    },
    "deploy": {
        "goal": "package -Pprod -DskipTests"
    }
}
I have modified application-prod.yml to include the URL/username/password of the DB add-on.
When I deploy, the deployment is successful but the application is not running.
On the application page I get a 404 error.
The DB is correctly initialised.
In the logs I have the following messages that I don't understand or am not able to resolve:
This message appears multiple times:
2017-09-18T09:21:22.701Z: 09:21:21.483 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiTemplate - Looking up JNDI object with name [java:comp/env/logging.exception-conversion-word]
2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiLocatorDelegate - Converted JNDI name [java:comp/env/logging.exception-conversion-word] not found - trying original name [logging.exception-conversion-word]. javax.naming.NameNotFoundException: Name [logging.exception-conversion-word] is not bound in this Context. Unable to find [logging.exception-conversion-word].
2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiTemplate - Looking up JNDI object with name [logging.exception-conversion-word]
2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiPropertySource - JNDI lookup for name [logging.exception-conversion-word] threw NamingException with message: Name [logging.exception-conversion-word] is not bound in this Context. Unable to find [logging.exception-conversion-word].. Returning null.
then:
2017-09-18T09:21:22.777Z: [09:21:12.705][debug][talledLocalContainer] Connection attempt with socket Socket[unconnected], current time is 1505726472705
2017-09-18T09:21:22.778Z: [09:21:12.705][debug][talledLocalContainer] Socket Socket[unconnected] for port 8009 closed
2017-09-18T09:21:22.778Z: [09:21:13.068][debug][talledLocalContainer] Executing '/usr/x86_64-pc-linux-gnu/lib/icedtea8/jre/bin/java' with arguments:
2017-09-18T09:21:22.778Z: '-version'
2017-09-18T09:21:22.778Z: The ' characters around the executable and arguments are
2017-09-18T09:21:22.778Z: not part of the command.
2017-09-18T09:21:22.779Z: [09:21:13.085][debug][talledLocalContainer] Output appended to /tmp/cargo-jvm-version-4176730048875251522.txt
2017-09-18T09:21:22.779Z: [09:21:13.085][debug][talledLocalContainer] Error appended to /tmp/cargo-jvm-version-4176730048875251522.txt
2017-09-18T09:21:22.779Z: [09:21:13.086][debug][talledLocalContainer] Project base dir set to: /home/bas/app_4b724c3b-6703-474e-9ec4-65d775cd0013
2017-09-18T09:21:22.779Z: [09:21:13.086][debug][talledLocalContainer] Execute:Java13CommandLauncher: Executing '/usr/x86_64-pc-linux-gnu/lib/icedtea8/jre/bin/java' with arguments:
2017-09-18T09:21:22.779Z: '-version'
2017-09-18T09:21:22.779Z: The ' characters around the executable and arguments are
2017-09-18T09:21:22.779Z: not part of the command.
And multiple times:
2017-09-18T09:21:22.793Z: [09:21:13.416][debug][URLDeployableMonitor] Checking URL [http://localhost:8080/cargocpc/index.html] for status using a timeout of [120000] ms...
2017-09-18T09:21:22.794Z: [09:21:13.452][debug][URLDeployableMonitor] URL [http://localhost:8080/cargocpc/index.html] is not responding: -1 java.net.ConnectException: Connection refused (Connection refused)
2017-09-18T09:21:22.794Z: [09:21:13.452][debug][URLDeployableMonitor] Notifying monitor listener [org.codehaus.cargo.container.spi.deployer.DeployerWatchdog#7bd4937b]
Ending with:
2017-09-18T09:21:32.710Z: 2017-09-18 09:21:28.724 INFO 2232 --- [ost-startStop-1] com.bbs.dm.config.WebConfigurer : Web application configuration, using profiles: prod
2017-09-18T09:21:32.711Z: 2017-09-18 09:21:28.735 INFO 2232 --- [ost-startStop-1] com.bbs.dm.config.WebConfigurer : Web application fully configured
2017-09-18T09:21:32.711Z: 2017-09-18 09:21:28.994 DEBUG 2232 --- [ost-startStop-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Starting Liquibase synchronously
2017-09-18T09:21:36.985Z: Nothing listening on 8080. Please update your configuration and redeploy
2017-09-18T09:21:52.730Z: 2017-09-18 09:21:47.492 DEBUG 2232 --- [ost-startStop-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Started Liquibase in 18498 ms
2017-09-18T09:21:57.985Z: Application start successful
2017-09-18T09:21:57.985Z: No cron to setup
2017-09-18T09:21:57.986Z: Created symlink /etc/systemd/system/multi-user.target.wants/zabbix-agentd.service → /usr/x86_64-pc-linux-gnu/lib/systemd/system/zabbix-agentd.service.
I have done nothing else except follow the Clevercloud documentation to deploy.
Have I missed something in the configuration?
(For info, the application deploys fine on other platforms like Heroku or Pivotal.)
To deploy a JHipster application on Clevercloud, here is what worked for me.
I followed the indications given to create an application and deploy it using the CLI.
Configuration files:
clevercloud/war.json
{
    "build": {
        "type": "maven",
        "goal": "package -Pprod -DskipTests"
    },
    "deploy": {
        "jarName": "target/myapp-1.0.0.war"
    }
}
clevercloud/maven.json
{
    "build": {
        "type": "maven",
        "goal": "package -Pprod -DskipTests"
    },
    "deploy": {
        "goal": "package -Pprod -DskipTests"
    }
}
I modified my application-prod.yml to link the db.
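For reference, a minimal sketch of the datasource section; the JDBC URL, username, and password below are placeholders for the values exposed by the DB add-on (assuming a MySQL add-on here):
# application-prod.yml (excerpt) -- placeholder values only
spring:
  datasource:
    url: jdbc:mysql://<addon-host>:3306/<addon-database>?useUnicode=true&characterEncoding=utf8
    username: <addon-username>
    password: <addon-password>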

how to use secure docker registry (by CA) for mesos container?

In DCOS, I want to deploy a Mesos container with a self-defined image that is stored in a local secure Docker registry, secured by a CA (not by username and password!).
The JSON is:
{
    "id": "/gpu-tflinker",
    "cmd": "while [ true ] ; do nvidia-smi; sleep 5; done",
    "cpus": 0.1,
    "mem": 1024,
    "gpus": 1,
    "instances": 1,
    "constraints": [
        [
            "hostname",
            "CLUSTER",
            "10.140.0.22"
        ]
    ],
    "container": {
        "type": "MESOS",
        "docker": {
            "image": "tflinker:test-gpu",
            "credential": null
        }
    }
}
The above JSON fails to run on Marathon, and there is no content in Mesos's stderr and stdout files. In the mesos-agent log, the error message is:
E0721 05:01:57.726367 22498 slave.cpp:3976] Container 'e2c68720-0fb7-41bc-9d3b-a2b5e4793816' for executor 'gpu-tflinker.b6f96725-6dd1-11e7-ba5d-0242b2c758c0' of framework 1079aaea-6dde-4dc1-8990-d926a895de78-0000 failed to start: Unexpected HTTP response '401 Unauthorized' when trying to get the manifest
W0721 05:01:57.726478 22497 composing.cpp:541] Container 'e2c68720-0fb7-41bc-9d3b-a2b5e4793816' is already destroyed
I0721 05:01:57.726583 22497 slave.cpp:4082] Executor 'gpu-tflinker.b6f96725-6dd1-11e7-ba5d-0242b2c758c0' of framework 1079aaea-6dde-4dc1-8990-d926a895de78-0000 has terminated with unknown status
I0721 05:01:57.726603 22497 slave.cpp:4193] Cleaning up executor 'gpu-tflinker.b6f96725-6dd1-11e7-ba5d-0242b2c758c0' of framework 1079aaea-6dde-4dc1-8990-d926a895de78-0000
I0721 05:01:57.726794 22497 slave.cpp:4281] Cleaning up framework 1079aaea-6dde-4dc1-8990-d926a895de78-0000
So it seems Mesos failed to fetch the Docker image. I've configured the CA file for dockerd (moved the CA files to /etc/docker/certs.d/; the layout is shown below), so I can 'docker pull' the image on the local machine, but I am not sure how to configure the CA file for Mesos.
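For reference, the CA files follow Docker's standard certs.d layout (my registry host and port are redacted here as placeholders):
/etc/docker/certs.d/
└── <registry-host>:<port>/
    └── ca.crt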
In the mesos-agent configuration there is an option --docker_config=VALUE, but it seems it can only be used for a username/password-secured registry; I don't know how to configure it for a CA-secured registry.
Can anybody help me out? Thanks!
I think the CA file is just for encryption; you will still need a username and password even with the CA approach, I think.
In my setup, I put an auth file into the container to authorize against my private registry.
I wrote a web service for downloading the auth file xxx.tar.gz (format: .docker/config.json inside the tar.gz).
In the config.json the content is {"auths": {"test.com:6999": {"auth": "(username:password) [base64 encoded]"}}}, for example {"auths": {"test.com:6999": {"auth": "Y2NjOjEyMw=="}}}.
Use Mesos uris to download the prepared auth files into the containers; then the pull is authorized:
"uris": [
"http:your download url"
]
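For illustration, a minimal sketch of how that auth value is generated; ccc:123 is the hypothetical username:password pair from the example above:
import base64

# The "auth" field of .docker/config.json is base64("username:password").
credentials = "ccc:123"  # hypothetical credentials
print(base64.b64encode(credentials.encode()).decode())  # -> Y2NjOjEyMw==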

Apache Drill failed to connect to HDFS

This is my hdfs version:
NameNode '10.207.78.21:38234'
Started: Mon Feb 02 19:16:43 CST 2015
Version: 1.0.4, r1393290
This is the config of the Drill file system plugin:
{
    "type": "file",
    "enabled": true,
    "connection": "hdfs://10.207.78.21:38234",
    "config": null,
    "workspaces": {
        "root": {
            "location": "/",
            "writable": false,
            "defaultInputFormat": null
        },
        "tmp": {
            "location": "/tmp",
            "writable": true,
            "defaultInputFormat": null
        }
    },
    "formats": ...
This is my test data:
bash-4.3$ ./hadoop fs -cat /test
{"key": "value"}
And Drill fails when executing a query in embedded mode:
0: jdbc:drill:zk=local> SELECT * FROM rpmp.`/test` LIMIT 20;
Error: SYSTEM ERROR: EOFException
[Error Id: fd784c1a-8353-430a-9ae3-08a5154755fe on xxx.com:31010]
(org.apache.drill.exec.work.foreman.ForemanException) Unexpected exception during fragment initialization: Failed to create schema tree: End of File Exception between local host is: "xxx.com/10.95.112.80"; destination host is: "yyy.com":38234; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
org.apache.drill.exec.work.foreman.Foreman.run():262
java.util.concurrent.ThreadPoolExecutor.runWorker():1145
java.util.concurrent.ThreadPoolExecutor$Worker.run():615
java.lang.Thread.run():745
Caused By (org.apache.drill.common.exceptions.DrillRuntimeException) Failed to create schema tree: End of File Exception between local host is: "xxx.com/10.95.112.80"; destination host is: "yyy.com":38234; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
org.apache.drill.exec.ops.QueryContext.getRootSchema():169
org.apache.drill.exec.ops.QueryContext.getRootSchema():151
...

yet another Could not contact Elasticsearch at http://logstash.example.com:9200

I have installed logstash + elasticsearch + kibana on one host and received the error from the title. I have googled all over the related topics, still no luck, and I am stuck.
I will share the configs I have made:
elasticsearch.yml
cluster.name: hive
node.name: "logstash-central"
network.bind_host: 10.1.1.25
Output from /var/log/elasticsearch/hive.log:
[2015-01-13 15:18:06,562][INFO ][node ] [logstash-central] initializing ...
[2015-01-13 15:18:06,566][INFO ][plugins ] [logstash-central] loaded [], sites []
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] initialized
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] starting ...
[2015-01-13 15:18:09,385][INFO ][transport ] [logstash-central] bound_address {inet[/10.1.1.25:9300]}, publish_address {inet[/10.1.1.25:9300]}
[2015-01-13 15:18:09,401][INFO ][discovery ] [logstash-central] hive/T2LZruEtRsGPAF_Cx3BI1A
[2015-01-13 15:18:13,173][INFO ][cluster.service ] [logstash-central] new_master [logstash-central][T2LZruEtRsGPAF_Cx3BI1A][logstash.tw.intra][inet[/10.1.1.25:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-13 15:18:13,193][INFO ][http ] [logstash-central] bound_address {inet[/10.1.1.25:9200]}, publish_address {inet[/10.1.1.25:9200]}
[2015-01-13 15:18:13,194][INFO ][node ] [logstash-central] started
[2015-01-13 15:18:13,209][INFO ][gateway ] [logstash-central] recovered [0] indices into cluster_state
Accessing logstash.example.com:9200 gives the ordinary output, as in the ES guide:
{
    "status" : 200,
    "name" : "logstash-central",
    "cluster_name" : "hive",
    "version" : {
        "number" : "1.4.2",
        "build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
        "build_timestamp" : "2014-12-16T14:11:12Z",
        "build_snapshot" : false,
        "lucene_version" : "4.10.2"
    },
    "tagline" : "You Know, for Search"
}
Accessing http://logstash.example.com:9200/_status? gives the following:
{"_shards":{"total":0,"successful":0,"failed":0},"indices":{}}
Kibana's config.js is the default:
elasticsearch: "http://"+window.location.hostname+":9200"
Kibana is used via nginx. Here is /etc/nginx/conf.d/nginx.conf:
server {
    listen *:80;
    server_name logstash.example.com;
    location / {
        root /usr/share/kibana3;
Logstash config file is /etc/logstash/conf.d/central.conf:
input {
    redis {
        host => "10.1.1.25"
        type => "redis-input"
        data_type => "list"
        key => "logstash"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        host => "logstash.example.com"
    }
}
Redis is working and the traffic passes between the master and slave (I've checked it via tcpdump).
15:46:06.189814 IP 10.1.1.50.41617 > 10.1.1.25.6379: Flags [P.], seq 89560:90064, ack 1129, win 115, options [nop,nop,TS val 3572086227 ecr 3571242836], length 504
netstat -apnt shows the following:
tcp 0 0 10.1.1.25:6379 10.1.1.50:41617 ESTABLISHED 21112/redis-server
tcp 0 0 10.1.1.25:9300 10.1.1.25:44011 ESTABLISHED 22598/java
tcp 0 0 10.1.1.25:9200 10.1.1.35:51145 ESTABLISHED 22598/java
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22379/nginx
Could you please tell me in which direction I should investigate the issue?
Thanks in advance.
The problem is likely due to the nginx setup and the fact that Kibana, while installed on your server, runs in your browser and tries to access Elasticsearch from there. The typical way this is solved is by setting up a proxy in nginx and then changing your config.js.
You have what appears to be a correct proxy set up in nginx for Kibana, but you'll need some additional work for Kibana to be able to access Elasticsearch.
Check the comments on this post: http://vichargrave.com/ossec-log-management-with-elasticsearch/
And check this post: https://groups.google.com/forum/#!topic/elasticsearch/7hPvjKpFcmQ
And this sample nginx config: https://github.com/johnhamelink/ansible-kibana/blob/master/templates/nginx.conf.j2
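For illustration, a minimal sketch of the kind of proxy block those links describe; the /es/ path is an assumption for this example, and the upstream address 10.1.1.25:9200 is taken from the configs above:
location /es/ {
    # Forward browser requests for /es/... to Elasticsearch (strips the /es prefix).
    proxy_pass http://10.1.1.25:9200/;
    proxy_set_header Host $host;
}
With something like this in place, config.js can point at "http://"+window.location.hostname+"/es" instead of port 9200 directly.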
You'll have to specify the protocol for elasticsearch in the output section:
elasticsearch {
    host => "logstash.example.com"
    protocol => 'http'
}

Why does a websocket in PebbleKit JS cause iOS app to crash?

I am attempting to log Pebble accelerometer data to my computer. I was hoping to use PebbleKit JS to communicate with the watch itself, and then use websockets to send that data to my computer, but currently the websocket only sends one message and then the iOS app crashes.
Here is the content of pebble-js-app.js:
var ws = new WebSocket('ws://192.168.1.134:8888');

// Called when JS is ready
Pebble.addEventListener("ready",
    function(e) {
        console.log("js initialized");
    }
);

// Called when an incoming message from the Pebble is received
Pebble.addEventListener("appmessage",
    function(e) {
        console.log(ws.readyState);
        if (ws.readyState === 1) {
            ws.send('this gets sent');
            ws.send(e.payload.message);
        }
        console.log("acc data: " + e.payload.message);
    }
);
And here is the log that I get when I run the app:
[INFO ] Enabling application logging...
[INFO ] Displaying logs ... Ctrl-C to interrupt.
[INFO ] JS: starting app: 4C74DC80-A54E-4D0B-9BAB-FD355D364334 accelero
[INFO ] app is ready: 1
[INFO ] JS: accelerometer: js initialized
[INFO ] JS: accelerometer: 0
[INFO ] JS: accelerometer: acc data: 3131
[INFO ] JS: accelerometer: 1
[ERROR ] Lost connection to Pebble
The websocket server running on my computer logs the first message sent from the JavaScript. Only one message is sent, even when the order is changed (e.g. it prints either this gets sent or a single actual reading from the accelerometer). Curiously, the built-in logging (console.log) from JavaScript has started printing only 3131 for the received data, even when the websocket sends valid data. Can you see any errors in the code, or do you have any other suggestions?
