This document describes how to recover a Virtualized Packet Core (VPC) virtual machine (VM) in a Cisco Ultra Services Platform (UltraM) setup after the VM has been unreachable for a period of time and the Cisco Elastic Services Controller (ESC) has already attempted, and failed, to recover it.
In an UltraM setup, a compute node is deleted or becomes unreachable. When ESC attempts to recover the VM hosted on that node, the recovery fails because the node cannot be reached. This scenario can be simulated by removing the power cables from the Unified Computing System (UCS) blade that hosts the compute node. Once ESC has failed to recover the VM, the instance sits in ERROR state in OpenStack and the corresponding card remains in Booting state on the VPC.
In this example, SF card 5 corresponds to the instance vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c:
[local]rcdn-ulram-lab# show card table
Slot Card Type Oper State SPOF Attach
----------- -------------------------------------- ------------- ---- ------
1: CFC Control Function Virtual Card Standby -
2: CFC Control Function Virtual Card Active No
3: FC 1-Port Service Function Virtual Card Active No
4: FC 1-Port Service Function Virtual Card Active No
5: FC 1-Port Service Function Virtual Card Booting -
6: FC 1-Port Service Function Virtual Card Active No
7: FC 1-Port Service Function Virtual Card Active No
[stack@ultram-ospd ~]$ nova list
+--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------+
| beab0296-8cfa-4b63-8a05-a800637199f5 | Testcompanion | ACTIVE | - | Running | testcomp-gn=10.10.11.8; mgmt=172.16.181.18, 10.201.206.46; testcomp-sig=10.10.13.5; testcomp-gi=10.10.12.7 |
| 235f5591-9502-4ba3-a003-b254494d258b | auto-deploy-ISO-590-uas-0 | ACTIVE | - | Running | mgmt=172.16.181.11, 10.201.206.44 |
| 9450cb19-f073-476b-a750-9336b26e3c6a | auto-it-vnf-ISO-590-uas-0 | ACTIVE | - | Running | mgmt=172.16.181.8, 10.201.206.43 |
| d0d91636-951d-49db-a92b-b2a639f5db9d | autovnf1-uas-0 | ACTIVE | - | Running | orchestr=172.16.180.14; mgmt=172.16.181.13 |
| 901f30e2-e96e-4658-9e1e-39a45b5859c7 | autovnf1-uas-1 | ACTIVE | - | Running | orchestr=172.16.180.5; mgmt=172.16.181.12 |
| 9edb3a8d-a69b-4912-86f6-9d0b05d6210d | autovnf1-uas-2 | ACTIVE | - | Running | orchestr=172.16.180.16; mgmt=172.16.181.5 |
| 56ce362c-3494-4106-98e3-ba06e56ee4ed | ultram-vnfm1-ESC-0 | ACTIVE | - | Running | orchestr=172.16.180.9; mgmt=172.16.181.6, 10.201.206.55 |
| bb687399-e1f9-44b2-a258-cfa29dcf178e | ultram-vnfm1-ESC-1 | ACTIVE | - | Running | orchestr=172.16.180.15; mgmt=172.16.181.7 |
| bfc4096c-4ff7-4b30-af3f-5bc3810b30e3 | ultram-vnfm1-em_ultram_0_9b5ccf05-c340-44da-9bca-f5af4689ea42 | ACTIVE | - | Running | orchestr=172.16.180.7; mgmt=172.16.181.14 |
| cf7ddc9e-5e6d-4e38-a606-9dc9d31c559d | ultram-vnfm1-em_ultram_0_c2533edd-8756-44fb-a8bf-98b9c10bfacd | ACTIVE | - | Running | orchestr=172.16.180.8; mgmt=172.16.181.15 |
| 592b5b3f-0b0b-4bc6-81e7-a8cc9a609594 | ultram-vnfm1-em_ultram_0_ce0c37a0-509e-45d1-9d00-464988e02730 | ACTIVE | - | Running | orchestr=172.16.180.6; mgmt=172.16.181.10 |
| 143baf4f-024a-47f1-969a-d4d79d89be14 | vnfd1-deployment_c1_0_84c5bc9e-9d80-4628-b88a-f8a0011b5d4b | ACTIVE | - | Running | orchestr=172.16.180.26; ultram-vnfm1-di-internal1=192.168.1.13; mgmt=172.16.181.25 |
| b74a0365-3be1-4bee-b1cc-e454d5b0cd11 | vnfd1-deployment_c2_0_66bac767-39fe-4972-b877-7826468a762e | ACTIVE | - | Running | orchestr=172.16.180.10; ultram-vnfm1-di-internal1=192.168.1.5; mgmt=172.16.181.20, 10.201.206.45 |
| 59a02ec2-bed6-4ad8-81ff-e8a922742f7b | vnfd1-deployment_s3_0_f9f6b7a6-1458-4b22-b40f-33f8af3500b8 | ACTIVE | - | Running | ultram-vnfm1-service-network1=10.10.10.4; orchestr=172.16.180.17; ultram-vnfm1-di-internal1=192.168.1.6 |
| 52e9a2b0-cf2c-478d-baea-f4a5f3b7f327 | vnfd1-deployment_s4_0_8c78cfd9-57c5-4394-992a-c86393187dd0 | ACTIVE | - | Running | ultram-vnfm1-service-network1=10.10.10.11; orchestr=172.16.180.20; ultram-vnfm1-di-internal1=192.168.1.3 |
| bd7c6600-3e8f-4c09-a35c-89921bbf1b35 | vnfd1-deployment_s5_0_f1c48ea1-4a91-4098-86f6-48e172e23c83 | ACTIVE | - | Running | ultram-vnfm1-service-network1=10.10.10.12; orchestr=172.16.180.13; ultram-vnfm1-di-internal1=192.168.1.2 |
| 085baf6a-02bf-4190-ac38-bbb33350b941 | vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c | ERROR | - | NOSTATE | |
| ea03767f-5dd9-43ed-8e9d-603590da2580 | vnfd1-deployment_s7_0_e887d8b1-7c98-4f60-b343-b0be7b387b32 | ACTIVE | - | Running | ultram-vnfm1-service-network1=10.10.10.10; orchestr=172.16.180.18; ultram-vnfm1-di-internal1=192.168.1.9 |
+--------------------------------------+---------------------------------------------------------------+--------+------------+-------------+------------------------------------------------------------------------------------------------------------+
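Before triggering a manual recovery, it can also help to cross-check how ESC itself sees the affected VM group. This is a minimal sketch, assuming the default ESC installation path and the VM naming used in this example; verify the exact esc_nc_cli query syntax on your ESC release:
# On the active ESC VM, dump the ESC operational data and filter for the affected VM
cd /opt/cisco/esc/esc-confd/esc-cli
./esc_nc_cli get esc_datamodel/opdata | grep vnfd1-deployment_s6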
Once ESC has tried and failed to recover the faulty VM, it marks the VM as a failed instance in OpenStack and does not retry the recovery on its own.
These are the ESC notifications logged for the failed VM recovery:
15:11:04,617 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
15:11:04,617 11-Aug-2017 WARN Type: VM_RECOVERY_INIT
15:11:04,617 11-Aug-2017 WARN Status: SUCCESS
15:11:04,617 11-Aug-2017 WARN Status Code: 200
15:11:04,617 11-Aug-2017 WARN Status Msg: Recovery event for VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] triggered.
15:11:04,617 11-Aug-2017 WARN Tenant: core
15:11:04,617 11-Aug-2017 WARN Service ID: NULL
15:11:04,617 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
15:11:04,617 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
15:11:04,617 11-Aug-2017 WARN VM group name: s6
15:11:04,618 11-Aug-2017 WARN VM Source:
15:11:04,618 11-Aug-2017 WARN VM ID: 4d6b1b6f-6137-4e8e-b61c-66d5fb59ba0d
15:11:04,618 11-Aug-2017 WARN Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
15:11:04,618 11-Aug-2017 WARN Host Name: ultram-rcdnlab-compute-4.localdomain
15:11:04,618 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
15:11:04,618 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
15:16:38,019 11-Aug-2017 WARN
15:16:38,020 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
15:16:38,020 11-Aug-2017 WARN Type: VM_RECOVERY_REBOOT
15:16:38,020 11-Aug-2017 WARN Status: FAILURE
15:16:38,020 11-Aug-2017 WARN Status Code: 500
15:16:38,020 11-Aug-2017 WARN Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] failed to be rebooted.
15:16:38,020 11-Aug-2017 WARN Tenant: core
15:16:38,020 11-Aug-2017 WARN Service ID: NULL
15:16:38,020 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
15:16:38,020 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
15:16:38,020 11-Aug-2017 WARN VM group name: s6
15:16:38,021 11-Aug-2017 WARN VM Source:
15:16:38,021 11-Aug-2017 WARN VM ID: 4d6b1b6f-6137-4e8e-b61c-66d5fb59ba0d
15:16:38,021 11-Aug-2017 WARN Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
15:16:38,021 11-Aug-2017 WARN Host Name: ultram-rcdnlab-compute-4.localdomain
15:16:38,021 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
15:16:38,021 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
15:16:48,286 11-Aug-2017 WARN
15:16:48,286 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
15:16:48,286 11-Aug-2017 WARN Type: VM_RECOVERY_UNDEPLOYED
15:16:48,286 11-Aug-2017 WARN Status: SUCCESS
15:16:48,286 11-Aug-2017 WARN Status Code: 204
15:16:48,286 11-Aug-2017 WARN Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] has been undeployed.
15:16:48,286 11-Aug-2017 WARN Tenant: core
15:16:48,286 11-Aug-2017 WARN Service ID: NULL
15:16:48,286 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
15:16:48,286 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
15:16:48,286 11-Aug-2017 WARN VM group name: s6
15:16:48,286 11-Aug-2017 WARN VM Source:
15:16:48,286 11-Aug-2017 WARN VM ID: 4d6b1b6f-6137-4e8e-b61c-66d5fb59ba0d
15:16:48,286 11-Aug-2017 WARN Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
15:16:48,286 11-Aug-2017 WARN Host Name: ultram-rcdnlab-compute-4.localdomain
15:16:48,287 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
15:16:48,287 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
15:18:04,418 11-Aug-2017 WARN
15:18:04,418 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
15:18:04,418 11-Aug-2017 WARN Type: VM_RECOVERY_COMPLETE
15:18:04,418 11-Aug-2017 WARN Status: FAILURE
15:18:04,418 11-Aug-2017 WARN Status Code: 500
15:18:04,418 11-Aug-2017 WARN Status Msg: Error deploying VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] as part of recovery workflow. VIM Driver: VM booted in ERROR state in Openstack: No valid host was found. There are not enough hosts available.
15:18:04,418 11-Aug-2017 WARN Tenant: core
15:18:04,418 11-Aug-2017 WARN Service ID: NULL
15:18:04,418 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
15:18:04,418 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
15:18:04,418 11-Aug-2017 WARN VM group name: s6
15:18:04,418 11-Aug-2017 WARN VM Source:
15:18:04,418 11-Aug-2017 WARN VM ID: 4d6b1b6f-6137-4e8e-b61c-66d5fb59ba0d
15:18:04,418 11-Aug-2017 WARN Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
15:18:04,418 11-Aug-2017 WARN Host Name: ultram-rcdnlab-compute-4.localdomain
15:18:04,418 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
15:18:04,418 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
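The final VM_RECOVERY_COMPLETE notification with status FAILURE and the message "No valid host was found" confirms that OpenStack could not place the VM anywhere while the compute node was unreachable. To pull only these notification summaries out of the event log, a simple filter such as the following can be used (a sketch, based on the yangesc.log path shown later in this document):
# Show only the notification type, status, and message lines from the ESC event log
grep -E "Type:|Status:|Status Msg:" /var/log/esc/yangesc.log | tail -n 30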
1. Power the compute node back on and wait for the hypervisor to come up:
[root@ultram-ospd ~]# su - stack
[stack@ultram-ospd ~]$ source stackrc
[stack@ultram-ospd ~]$ nova hypervisor-list
+----+---------------------------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------------------------+-------+---------+
| 3 | ultram-rcdnlab-compute-10.localdomain | up | enabled |
| 6 | ultram-rcdnlab-compute-5.localdomain | up | enabled |
| 9 | ultram-rcdnlab-compute-6.localdomain | up | enabled |
| 12 | ultram-rcdnlab-compute-3.localdomain | up | enabled |
| 15 | ultram-rcdnlab-compute-9.localdomain | up | enabled |
| 18 | ultram-rcdnlab-compute-1.localdomain | up | enabled |
| 21 | ultram-rcdnlab-compute-8.localdomain | up | enabled |
| 24 | ultram-rcdnlab-compute-4.localdomain | down | enabled |
| 27 | ultram-rcdnlab-compute-7.localdomain | up | enabled |
| 30 | ultram-rcdnlab-compute-2.localdomain | up | enabled |
| 33 | ultram-rcdnlab-compute-0.localdomain | up | enabled |
+----+---------------------------------------+-------+---------+
[stack@ultram-ospd ~]$ nova hypervisor-list
+----+---------------------------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------------------------+-------+---------+
| 3 | ultram-rcdnlab-compute-10.localdomain | up | enabled |
| 6 | ultram-rcdnlab-compute-5.localdomain | up | enabled |
| 9 | ultram-rcdnlab-compute-6.localdomain | up | enabled |
| 12 | ultram-rcdnlab-compute-3.localdomain | up | enabled |
| 15 | ultram-rcdnlab-compute-9.localdomain | up | enabled |
| 18 | ultram-rcdnlab-compute-1.localdomain | up | enabled |
| 21 | ultram-rcdnlab-compute-8.localdomain | up | enabled |
| 24 | ultram-rcdnlab-compute-4.localdomain | up | enabled |
| 27 | ultram-rcdnlab-compute-7.localdomain | up | enabled |
| 30 | ultram-rcdnlab-compute-2.localdomain | up | enabled |
| 33 | ultram-rcdnlab-compute-0.localdomain | up | enabled |
+----+---------------------------------------+-------+---------+
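The first listing above was taken while compute-4 was still down; the second shows it reported up again after power-on. Instead of re-running the command by hand, the hypervisor state can be polled until it recovers (a sketch, assuming stackrc is sourced and using the hostname from this example):
# Re-run the hypervisor listing every 30 seconds until compute-4 shows "up"
watch -n 30 'nova hypervisor-list | grep ultram-rcdnlab-compute-4'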
2. Identify the instance ID of the failed VM in the nova list:
[root@ultram-ospd ~]# su - stack
[stack@ultram-ospd ~]$ source corerc
[stack@ultram-ospd ~]$ nova list | grep ERROR
| 085baf6a-02bf-4190-ac38-bbb33350b941 | vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c | ERROR | - | NOSTATE |
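The same check can be made with the unified OpenStack client, if it is available on the OSPD node. This is a sketch using the instance ID from the output above; replace it with the ID from your own deployment:
# Show the name and status of the failed instance by its ID
openstack server show 085baf6a-02bf-4190-ac38-bbb33350b941 -c name -c status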
3. Initiate a manual recovery on ESC from the CLI, using the instance name identified in the previous step:
[admin@ultram-vnfm1-esc-0 ~]$ cd /opt/cisco/esc/esc-confd/esc-cli
[admin@ultram-vnfm1-esc-0 esc-cli]$ ./esc_nc_cli recovery-vm-action DO vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c
Recovery VM Action
/opt/cisco/esc/confd/bin/netconf-console --port=830 --host=127.0.0.1 --user=admin --privKeyFile=/home/admin/.ssh/confd_id_dsa --privKeyType=dsa --rpc=/tmp/esc_nc_cli.hZsdLQ2Mle
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<ok/>
</rpc-reply>
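While the recovery workflow runs, the lower-level ESC manager log can be followed in parallel with the yangesc.log shown in the next step. This is a sketch that assumes the default ESC log location:
# Follow the ESC manager log for detailed recovery workflow messages
tail -f /var/log/esc/escmanager.log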
4. Check yangesc.log and confirm in OpenStack Horizon that the instance has recovered:
[admin@ultram-vnfm1-esc-0 ~]$ tail -f /var/log/esc/yangesc.log
16:41:54,445 11-Aug-2017 INFO ===== RECOVERY VM ACTION REQUEST RECEIVED =====
16:41:54,445 11-Aug-2017 INFO Type: DO
16:41:54,445 11-Aug-2017 INFO Recovery VM name: vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c
16:41:58,092 11-Aug-2017 INFO ===== RECOVERY VM ACTION REQUEST ACCEPTED =====
16:41:58,673 11-Aug-2017 WARN
16:41:58,673 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
16:41:58,674 11-Aug-2017 WARN Type: VM_RECOVERY_INIT
16:41:58,674 11-Aug-2017 WARN Status: SUCCESS
16:41:58,674 11-Aug-2017 WARN Status Code: 200
16:41:58,674 11-Aug-2017 WARN Status Msg: Recovery event for VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] triggered.
16:41:58,674 11-Aug-2017 WARN Tenant: core
16:41:58,674 11-Aug-2017 WARN Service ID: NULL
16:41:58,674 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
16:41:58,674 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
16:41:58,674 11-Aug-2017 WARN VM group name: s6
16:41:58,674 11-Aug-2017 WARN VM Source:
16:41:58,674 11-Aug-2017 WARN VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
16:41:58,674 11-Aug-2017 WARN Host ID:
16:41:58,674 11-Aug-2017 WARN Host Name:
16:41:58,674 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
16:41:58,674 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
16:42:19,794 11-Aug-2017 WARN
16:42:19,794 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
16:42:19,794 11-Aug-2017 WARN Type: VM_RECOVERY_REBOOT
16:42:19,794 11-Aug-2017 WARN Status: FAILURE
16:42:19,794 11-Aug-2017 WARN Status Code: 500
16:42:19,794 11-Aug-2017 WARN Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] failed to be rebooted.
16:42:19,794 11-Aug-2017 WARN Tenant: core
16:42:19,795 11-Aug-2017 WARN Service ID: NULL
16:42:19,795 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
16:42:19,795 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
16:42:19,795 11-Aug-2017 WARN VM group name: s6
16:42:19,795 11-Aug-2017 WARN VM Source:
16:42:19,795 11-Aug-2017 WARN VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
16:42:19,795 11-Aug-2017 WARN Host ID:
16:42:19,795 11-Aug-2017 WARN Host Name:
16:42:19,795 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
16:42:19,795 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
16:42:32,013 11-Aug-2017 WARN
16:42:32,013 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
16:42:32,013 11-Aug-2017 WARN Type: VM_RECOVERY_UNDEPLOYED
16:42:32,013 11-Aug-2017 WARN Status: SUCCESS
16:42:32,013 11-Aug-2017 WARN Status Code: 204
16:42:32,013 11-Aug-2017 WARN Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] has been undeployed.
16:42:32,013 11-Aug-2017 WARN Tenant: core
16:42:32,014 11-Aug-2017 WARN Service ID: NULL
16:42:32,014 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
16:42:32,014 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
16:42:32,014 11-Aug-2017 WARN VM group name: s6
16:42:32,014 11-Aug-2017 WARN VM Source:
16:42:32,014 11-Aug-2017 WARN VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
16:42:32,014 11-Aug-2017 WARN Host ID:
16:42:32,014 11-Aug-2017 WARN Host Name:
16:42:32,014 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
16:42:32,014 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
16:43:13,643 11-Aug-2017 WARN
16:43:13,643 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
16:43:13,643 11-Aug-2017 WARN Type: VM_RECOVERY_DEPLOYED
16:43:13,643 11-Aug-2017 WARN Status: SUCCESS
16:43:13,643 11-Aug-2017 WARN Status Code: 200
16:43:13,643 11-Aug-2017 WARN Status Msg: VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c] has been deployed as part of recovery.
16:43:13,643 11-Aug-2017 WARN Tenant: core
16:43:13,643 11-Aug-2017 WARN Service ID: NULL
16:43:13,643 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
16:43:13,643 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
16:43:13,643 11-Aug-2017 WARN VM group name: s6
16:43:13,643 11-Aug-2017 WARN VM Source:
16:43:13,643 11-Aug-2017 WARN VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
16:43:13,643 11-Aug-2017 WARN Host ID:
16:43:13,643 11-Aug-2017 WARN Host Name:
16:43:13,643 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
16:43:13,643 11-Aug-2017 WARN VM Target:
16:43:13,644 11-Aug-2017 WARN VM ID: a313e8dc-3b0f-4b41-8648-f9b9419bc826
16:43:13,644 11-Aug-2017 WARN Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
16:43:13,644 11-Aug-2017 WARN Host Name: ultram-rcdnlab-compute-4.localdomain
16:43:13,644 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
16:43:13,644 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
16:43:33,827 11-Aug-2017 WARN
16:43:33,827 11-Aug-2017 WARN ===== SEND NOTIFICATION STARTS =====
16:43:33,827 11-Aug-2017 WARN Type: VM_RECOVERY_COMPLETE
16:43:33,827 11-Aug-2017 WARN Status: SUCCESS
16:43:33,827 11-Aug-2017 WARN Status Code: 200
16:43:33,827 11-Aug-2017 WARN Status Msg: Recovery: Successfully recovered VM [vnfd1-deployment_s6_0_e03f87f5-63b6-4053-8d0f-0c9df963162c].
16:43:33,827 11-Aug-2017 WARN Tenant: core
16:43:33,827 11-Aug-2017 WARN Service ID: NULL
16:43:33,828 11-Aug-2017 WARN Deployment ID: b41ad0ec-bc74-4bb3-85b6-7ef430074187
16:43:33,828 11-Aug-2017 WARN Deployment name: vnfd1-deployment-1.0.0-1
16:43:33,828 11-Aug-2017 WARN VM group name: s6
16:43:33,828 11-Aug-2017 WARN VM Source:
16:43:33,828 11-Aug-2017 WARN VM ID: 085baf6a-02bf-4190-ac38-bbb33350b941
16:43:33,828 11-Aug-2017 WARN Host ID:
16:43:33,828 11-Aug-2017 WARN Host Name:
16:43:33,828 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
16:43:33,828 11-Aug-2017 WARN VM Target:
16:43:33,828 11-Aug-2017 WARN VM ID: a313e8dc-3b0f-4b41-8648-f9b9419bc826
16:43:33,828 11-Aug-2017 WARN Host ID: 20b7df6d083651eb04f1f014e8a4958ddf9c1654cb3ad9057adc7e73
16:43:33,828 11-Aug-2017 WARN Host Name: ultram-rcdnlab-compute-4.localdomain
16:43:33,828 11-Aug-2017 WARN [DEBUG-ONLY] VM IP: 10.10.10.9; 172.16.180.22; 192.168.1.12;
16:43:33,828 11-Aug-2017 WARN ===== SEND NOTIFICATION ENDS =====
[local]rcdn-ulram-lab# show card table
Slot Card Type Oper State SPOF Attach
----------- -------------------------------------- ------------- ---- ------
1: CFC Control Function Virtual Card Standby -
2: CFC Control Function Virtual Card Active No
3: FC 1-Port Service Function Virtual Card Active No
4: FC 1-Port Service Function Virtual Card Active No
5: FC 1-Port Service Function Virtual Card Standby -
6: FC 1-Port Service Function Virtual Card Active No
7: FC 1-Port Service Function Virtual Card Active No
[local]rcdn-ulram-lab#
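Card 5 re-registers with the VPC once the recovered VM boots, as shown in the card table above. As a final check, the instance can also be confirmed ACTIVE again from the OSPD node (a sketch, assuming corerc is sourced as in step 2; note that the recovered VM receives a new instance ID):
# The recovered SF instance should now be listed as ACTIVE
nova list | grep vnfd1-deployment_s6_0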