OpenStack Cinder can't create volumes, and cinder list and openstack volume service list show "The server is currently unavailable." - openstack-cinder

I have installed Cinder on the controller node and the block node.
I checked the status of openstack-cinder-scheduler and openstack-cinder-api (on the controller node) and of openstack-cinder-volume and target.service (on the block node); they are all running.
But when I run "cinder list", "cinder create", or "openstack volume service list", I only ever get one kind of output:
[root@controller //]# openstack volume service list
The server is currently unavailable. Please try again at a later time.
The Keystone service is temporarily unavailable.
(HTTP 503)
[root@controller //]# cinder list
ERROR: The server is currently unavailable. Please try again at a later time.
The Keystone service is temporarily unavailable.
I have checked the configuration in cinder.conf and nova.conf several times and have no idea what is wrong with them. Can you give me a suggestion? Thank you.

I found that I can create a volume from the dashboard, while from the command line I can't. It may be something wrong between the admin role and the cinder user. Try it again after adding the role:
[root@controller //]# openstack volume list
+--------------------------------------+----------+-----------+------+-------------+
| ID                                   | Name     | Status    | Size | Attached to |
+--------------------------------------+----------+-----------+------+-------------+
| 435acaae-44d8-4793-9a0c-61a4436b6b37 | volumev4 | available |    1 |             |
+--------------------------------------+----------+-----------+------+-------------+
[root@controller //]# openstack role add --project service --user cinder admin
[root@controller //]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2021-12-28T07:40:01.000000 |
| cinder-volume    | compute2@lvm | nova | enabled | up    | 2021-12-28T07:40:07.000000 |
+------------------+--------------+------+---------+-------+----------------------------+
Now it produces output.
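If anyone hits the same 503, it may be worth verifying the role assignment explicitly. A minimal check, assuming the cinder service user and the service project were created as in the standard installation guide and that admin-openrc holds your admin credentials:
[root@controller //]# source admin-openrc
[root@controller //]# openstack role assignment list --user cinder --project service --names
[root@controller //]# openstack role add --project service --user cinder admin
If the assignment list comes back empty, cinder-api cannot authenticate its service user against Keystone, which typically surfaces as the "Keystone service is temporarily unavailable" 503 shown above; re-running the role add as in the fix above should clear it.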

Related

Provision: how to set the device name?

Twenty days ago I successfully provisioned an ESP-32, and it works fine.
Today I successfully provisioned a second ESP-32 chip on another computer:
5.40 MiB / 5.40 MiB [------------------------------------] 100.00% 14.69 MiB p/s
looking for available hardware identities on disk
no hardware identities found on disk, claiming new hardware identity
Flashing device on port /dev/ttyUSB0
+--------------------------+--------------------------------------+
| SETTING                  | VALUE                                |
+--------------------------+--------------------------------------+
| Firmware                 | v1.0.2                               |
| Device Model             | esp32-4mb                            |
| Hardware ID              | XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX |
| Hardware Batch & Seq. No | 2020-11-10#524                       |
| context                  | remote                               |
| broker.host              | device.toit.io                       |
| broker.cn                | device.toit.io                       |
| broker.port              | 9426                                 |
| wifi.ssid                | SureDemo                             |
| wifi.password            | suremote                             |
+--------------------------+--------------------------------------+
erasing device flash
successfully erased device flash
writing device partitions
successfully written device partitions
reading hardware chip information
successfully read hardware chip information
+--------------------------+--------------------------------------+
| SETTING                  | VALUE                                |
+--------------------------+--------------------------------------+
| factory device model     | esp32-4mb                            |
| factory firmware version | v1.0.2                               |
| chip ID                  |                                      |
+--------------------------+--------------------------------------+
device was successfully flashed
However, I cannot start the application on this device:
michael_k@michaelk:~/toit_apps/Hsm2/tests$ toit run test_hsm_switch_async_4.toit
No default device set. Provide the device name (with the --device flag) to the command
michael_k@michaelk:~/toit_apps/Hsm2/tests$
I realized that this new device needs to be given a different name than my default device micrcx-1. By the way, I can see my first device:
michael_k@michaelk:~/toit_apps/Hsm2/tests$ toit devices
+--------------------------------------+----------+-------------------+----------+
| DEVICE ID                            | NAME     | LAST SEEN         | FIRMWARE |
+--------------------------------------+----------+-------------------+----------+
| XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX | micrcx-1 | Apr 29 2021 04:05 | v1.0.2   |
+--------------------------------------+----------+-------------------+----------+
michael_k@michaelk:~/toit_apps/Hsm2/tests$
So, the question is: how to give a name to a new additional device and how to run an application on it?
Thanks in advance, MK
PS. Naturally, I could be wrong, but as far as I remember, the name of the first device was assigned by the Toit system automatically; I had nothing to do with it. micrcx is my computer's identifier.
It might be that your device wasn't claimed yet.
In the current release (but hopefully not in future releases), provisioning a device only puts the Toit framework on the device. At this point it is not yet associated with your account and must be claimed.
You can simply run:
toit device claim <hardware-ID> or toit device claim <hardware-ID> --name=<some-name>.
If no name is provided, then the system generates one. Typically these are built out of two words, for example nervous-plastic. You can always change the names at a later point.
Alternatively you can claim the device in the web UI. There is a "CLAIM OR REPLACE DEVICE" button on the top right of the "Devices" view.
FYI: I have edited your post to remove the hardware ID of the new device, so nobody else claims the device in the meantime.
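For completeness, a possible sequence after flashing, sketched with placeholder values (the hardware ID comes from the flashing output above, and micrcx-2 is just an example name, not one the system chose):
toit device claim <hardware-ID> --name=micrcx-2
toit devices
toit run --device micrcx-2 test_hsm_switch_async_4.toit
The --device flag is the one the error message above asks for; once the device is claimed and named, passing its name this way lets you run the application on it.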

Playbooks that need to be tweaked depending on the targeted environment

In our company we're using Ansible to target different environments: about 10 on the development and integration side, and 3 on the production side. Those environments differ in the amount of resources dedicated to them. For example, in production we have a standalone server or VM dedicated to running the JVM, another one for the DB, and so on, while in certain other environments those applications share the same server or VM. The problem is that some playbooks need to be tweaked in order to target such an environment, so how can we make them as generic and transparent as possible?
Thanks for your advice.
What you want to achieve is putting the right hosts in the right groups.
If you're familiar with ansible-inventory --graph, here is what you want to achieve:
@all:
|--@dev:
| |--devel1.domain.org
| |--devel2.domain.org
|--@db:
| |--db1.domain.org
| |--db2.domain.org
| |--dbprod1.domain.org
| |--appprod2.domain.org
|--@app:
| |--app1.domain.org
| |--app2.domain.org
| |--appprod1.domain.org
| |--appprod2.domain.org
|--@dev1:
| |--devel1.domain.org
| |--db1.domain.org
| |--app1.domain.org
|--@dev2:
| |--devel2.domain.org
| |--db2.domain.org
| |--app2.domain.org
|--@prod:
| |--@prod1:
| | |--dbprod1.domain.org
| | |--appprod1.domain.org
| |--@prod2:
| | |--appprod2.domain.org
Here the host appprod2.domain.org will inherit from the group_vars of app, db, prod, and prod2.
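As a sketch, an INI inventory that produces roughly the graph above could look like the following (hostnames and group names are taken from the graph; the file path and exact layout are just an assumption):
# inventory/hosts
[dev]
devel1.domain.org
devel2.domain.org

[db]
db1.domain.org
db2.domain.org
dbprod1.domain.org
appprod2.domain.org

[app]
app1.domain.org
app2.domain.org
appprod1.domain.org
appprod2.domain.org

[dev1]
devel1.domain.org
db1.domain.org
app1.domain.org

[dev2]
devel2.domain.org
db2.domain.org
app2.domain.org

[prod1]
dbprod1.domain.org
appprod1.domain.org

[prod2]
appprod2.domain.org

[prod:children]
prod1
prod2
Playbooks then only target the functional groups (hosts: db, hosts: app) and never need to know whether a given environment runs everything on one VM or on dedicated ones; anything environment-specific goes into group_vars/prod.yml, group_vars/dev1.yml, and so on.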

LXD: The "default" storage pool doesn't exist

I am getting the following error while trying to edit the default lxc profile:
The "default" storage pool doesn't exist
/snap/bin/lxd init
or
lxd init
solved the issue; just be sure to answer yes to the following prompt:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Then you can confirm with the following command and its output:
/snap/bin/lxc storage list
+---------+-------------+--------+--------------------------------------------+---------+
| NAME    | DESCRIPTION | DRIVER | SOURCE                                     | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| default |             | zfs    | /var/snap/lxd/common/lxd/disks/default.img | 1       |
+---------+-------------+--------+--------------------------------------------+---------+
Following this document will help.
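If you prefer not to re-run the interactive lxd init, a rough equivalent is to create the pool and attach it to the default profile by hand. This is only a sketch; the pool name, zfs driver, and root device settings are assumed to match the defaults shown in the listing above:
/snap/bin/lxc storage create default zfs
/snap/bin/lxc profile device add default root disk path=/ pool=default
/snap/bin/lxc profile show default
The last command lets you check that the default profile now references the new pool before editing it again.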

Can a 'test' user delete a MapR table? What permissions need to be granted to the test user to delete a MapR table?

I have created a new MapR-DB table with the below privileges:
"adminaccessperm":"u:root | u:mapr | u:test",
"deletefamilyperm":"u:root | u:mapr | u:test",
"defaultappendperm":"u:root | u:mapr | u:test",
"defaultreadperm":"u:root | u:mapr | u:test",
"defaultwriteperm":"u:root | u:mapr | u:test",
I am trying to delete the table as the test user, but I am not able to delete it.
I am getting the below error:
ERROR: User test(user id 503) does not have access to /EDMEVENT/events_1
You have to have full control permissions in order to delete. You can modify cluster and volume permissions using the acl set and acl edit commands, or using the MapR Control System. Detailed steps can be found in Cluster and Volume Permissions.
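As a sketch, granting full control with maprcli could look like the following. The volume name EDMEVENT is only inferred from the error path above and may differ in your cluster:
maprcli acl edit -type volume -name EDMEVENT -user test:fc
maprcli acl show -type volume -name EDMEVENT
Here fc is the full-control permission code; acl show lets you verify the change before retrying the table delete as the test user.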

What are the valid instanceState values for the Amazon EC2 API?

What are the valid instanceState values for the Amazon EC2 API? They don't seem to be defined in the current API doc, and Google doesn't turn up much. So far I know about:
0: pending
16: running
32: shutting-down
48: terminated
but I'm pretty sure I've seen an error state before.
Thanks!
As of posting, the current states are:
+--------+---------------+
| Code   | State         |
+--------+---------------+
| 0      | pending       |
| 16     | running       |
| 32     | shutting-down |
| 48     | terminated    |
| 64     | stopping      |
| 80     | stopped       |
+--------+---------------+
The documentation can be found in the Amazon Elastic Compute Cloud API Reference under InstanceStateType.
The docs also mention a state code 272 which "typically indicates a problem with the host running the instance". They suggest trying a reboot in the first instance, and posting on the EC2 forums if that doesn't solve the issue.
The docs seem to have been moved and I still can't find them, but here are two more state codes:
64: stopping
80: stopped
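If you would rather read these values off a live instance than from the docs, a quick check with the AWS CLI (the instance ID here is a placeholder):
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[].Instances[].State' --output table
This prints both the Code and the Name of the current state. Note that the API reference says only the low byte of the code represents the state and the high byte is for internal use, which is how values like the 272 mentioned above come about.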
