QNAP volume in error

flacon030

Hello everyone,
I have a small problem with my QNAP.
I have two single volumes, created under the names "TimeMachine" and "TimeMachine2", which are now in an error state; they are static volumes on disks 5 and 6.
I would like to delete them, but I can't: when I click on one of the two volumes and try to delete it, the page loads and stays stuck at 12%.
So I cannot delete them.
qnap1.png

Another odd thing: the two disks are reported as free (occupied) in RAID groups 4 and 5.
qnap2.png

Is there a way to delete them over SSH,

or do I have to resign myself to doing a factory reset?
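For reference, a read-only look at the arrays over SSH might look like this (a minimal sketch, assuming an admin shell; the array names are whatever md_checker and /proc/mdstat actually report):

Code:
# List all md arrays the kernel currently knows about (read-only)
cat /proc/mdstat
# QNAP's own superblock scanner; run as admin to avoid permission errors
md_checker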
 
Code:
[flacon030@Qnap-855 ~]$ md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

/usr/bin/md_checker: line 25: /mnt/HDA_ROOT/md_backup_2024-12-28_21.20.49: Permission denied

Scanning system...

[flacon030@Qnap-855 ~]$

-----------------------------------------------------------------------------------------------------------------------------------

Code:
[flacon030@Qnap-855 ~]$ pvs -a
  WARNING: Running as a non-root user. Functionality may be unavailable.
  WARNING: duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U: using /dev/drbd2 not /dev/md2
  Using duplicate PV /dev/drbd2 from subsystem DRBD, ignoring /dev/md2
  WARNING: duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C: using /dev/drbd3 not /dev/md3
  Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
  WARNING: duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C: using /dev/drbd3 not /dev/md3
  Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
  WARNING: duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U: using existing dev /dev/drbd2
  PV         VG    Fmt  Attr PSize   PFree
  /dev/drbd2 vg1   lvm2 a--    3.45t    0
  /dev/drbd3 vg2   lvm2 a--   10.89t    0
  /dev/md1   vg256 lvm2 a--  410.64g    0
  /dev/md13             ---       0     0
  /dev/md2   vg1   lvm2 a--    3.45t    0
  /dev/md256            ---       0     0
  /dev/md3   vg2   lvm2 a--   10.89t    0
  /dev/md321            ---       0     0
  /dev/md322            ---       0     0
  /dev/md9              ---       0     0
[flacon030@Qnap-855 ~]$




Hi,
Could you give the output of the following commands:
md_checker
pvs -a
lvs -a

Code:
[flacon030@Qnap-855 ~]$ md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

/usr/bin/md_checker: line 25: /mnt/HDA_ROOT/md_backup_2024-12-28_21.21.55: Permission denied

Scanning system...

[flacon030@Qnap-855 ~]$ pvs -a
  WARNING: Running as a non-root user. Functionality may be unavailable.
  WARNING: duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U: using /dev/drbd2 not /dev/md2
  Using duplicate PV /dev/drbd2 from subsystem DRBD, ignoring /dev/md2
  WARNING: duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C: using /dev/drbd3 not /dev/md3
  Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
  WARNING: duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C: using /dev/drbd3 not /dev/md3
  Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
  WARNING: duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U: using existing dev /dev/drbd2
  PV         VG    Fmt  Attr PSize   PFree
  /dev/drbd2 vg1   lvm2 a--    3.45t    0
  /dev/drbd3 vg2   lvm2 a--   10.89t    0
  /dev/md1   vg256 lvm2 a--  410.64g    0
  /dev/md13             ---       0     0
  /dev/md2   vg1   lvm2 a--    3.45t    0
  /dev/md256            ---       0     0
  /dev/md3   vg2   lvm2 a--   10.89t    0
  /dev/md321            ---       0     0
  /dev/md322            ---       0     0
  /dev/md9              ---       0     0
[flacon030@Qnap-855 ~]$ lvs -a
  WARNING: Running as a non-root user. Functionality may be unavailable.
  WARNING: duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C: using /dev/drbd3 not /dev/md3
  Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
  WARNING: duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U: using existing dev /dev/drbd2
  LV                      VG    Attr       LSize   Pool Origin                  Data%  Meta%  Move Log Cpy%Sync Convert
  lv1                     vg1   Vwi-aot---   4.00t tp1                          6.65
  lv1312                  vg1   -wi-ao---- 360.00m
  lv545                   vg1   -wi-------  35.30g
  snap10001               vg1   Vwi-aot---   4.00t tp1  lv1                     6.16
  tp1                     vg1   twi-aot---   3.35t                              8.27   0.09
  [tp1_tierdata_0]        vg1   vwi-aov---   4.00m
  [tp1_tierdata_1]        vg1   vwi-aov---   4.00m
  [tp1_tierdata_2]        vg1   Cwi-aoC---   3.35t      [tp1_tierdata_2_fcorig] 33.36                  0.00
  [tp1_tierdata_2_fcorig] vg1   owi-aoC---   3.35t
  [tp1_tmeta]             vg1   ewi-ao----  64.00g
  lv2                     vg2   Vwi-aot---  10.71t tp2                          100.00
  lv546                   vg2   -wi------- 111.49g
  tp2                     vg2   twi-aot---  10.72t                              99.99  1.12
  [tp2_tierdata_0]        vg2   vwi-aov---   4.00m
  [tp2_tierdata_1]        vg2   vwi-aov---   4.00m
  [tp2_tierdata_2]        vg2   Cwi-aoC---  10.72t      [tp2_tierdata_2_fcorig] 9.17                   77.34
  [tp2_tierdata_2_fcorig] vg2   owi-aoC---  10.72t
  [tp2_tmeta]             vg2   ewi-ao----  64.00g
  lv256                   vg256 Cwi-aoC--- 391.53g                              99.96  0.24
  [lv256_cdata]           vg256 Cwi-ao---- 391.53g
  [lv256_cmeta]           vg256 ewi-ao----  15.00g
  lv544                   vg256 -wi-------   4.11g
[flacon030@Qnap-855 ~]$
 
Thanks, here are the results:


Code:
[admin@Qnap-855 ~]# md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID:        39f1d8a3:e5ad481e:9ad90e48:12bdbb36
Level:        raid1
Devices:    2
Name:        md1
Chunk Size:    -
md Version:    1.0
Creation Time:    Sep 5 19:00:36 2023
Status:         ONLINE (md1) [UU]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       1   /dev/nvme0n1p3   0   Active   Dec 28 21:39:58 2024    15037   AA                      
 NAS_HOST       2   /dev/nvme1n1p3   1   Active   Dec 28 21:39:58 2024    15037   AA                      
===============================================================================================


RAID metadata found!
UUID:        f0c129cc:4ef2e383:4bb83f2a:a43dd067
Level:        raid1
Devices:    2
Name:        md2
Chunk Size:    -
md Version:    1.0
Creation Time:    Sep 5 19:02:20 2023
Status:         ONLINE (md2) [UU]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       3        /dev/sde3   0   Active   Dec 28 21:39:58 2024     5659   AA                      
 NAS_HOST       4        /dev/sdd3   1   Active   Dec 28 21:39:58 2024     5659   AA                      
===============================================================================================


RAID metadata found!
UUID:        bfc80e23:cac1aa77:0183f677:37fd686e
Level:        raid5
Devices:    4
Name:        md3
Chunk Size:    512K
md Version:    1.0
Creation Time:    Sep 5 19:11:40 2023
Status:         ONLINE (md3) [UUUU]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       5        /dev/sdc3   0   Active   Dec 28 21:39:58 2024    24933   AAAA                    
 NAS_HOST       6        /dev/sdf3   1   Active   Dec 28 21:39:58 2024    24933   AAAA                    
 NAS_HOST       7        /dev/sdb3   2   Active   Dec 28 21:39:58 2024    24933   AAAA                    
 NAS_HOST       8        /dev/sda3   3   Active   Dec 28 21:39:58 2024    24933   AAAA                    
===============================================================================================


RAID metadata found!
UUID:        80760e4a:8172affd:0d9eabc9:e94beb11
Level:        raid1
Devices:    2
Name:        md0
Chunk Size:    -
md Version:    0.90.00
Creation Time:    Sep 2 20:22:14 2010
Status:        OFFLINE
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST      10        /dev/sdg3   0   active   Sep 28 09:02:07 2018 48994503   aa                      
 NAS_HOST       9        /dev/sdh3   1   active   Sep 28 09:02:07 2018 48994503   aa                      
===============================================================================================

[admin@Qnap-855 ~]#


Code:
[admin@Qnap-855 ~]# pvs -a
  WARNING: duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U: using /dev/drbd2 not /dev/md2
  Using duplicate PV /dev/drbd2 from subsystem DRBD, ignoring /dev/md2
  WARNING: duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C: using /dev/drbd3 not /dev/md3
  Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
  WARNING: duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C: using /dev/drbd3 not /dev/md3
  Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
  WARNING: duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U: using existing dev /dev/drbd2
  PV         VG    Fmt  Attr PSize   PFree
  /dev/drbd2 vg1   lvm2 a--    3.45t    0
  /dev/drbd3 vg2   lvm2 a--   10.89t    0
  /dev/md1   vg256 lvm2 a--  410.64g    0
  /dev/md13             ---       0     0
  /dev/md2   vg1   lvm2 a--    3.45t    0
  /dev/md256            ---       0     0
  /dev/md3   vg2   lvm2 a--   10.89t    0
  /dev/md321            ---       0     0
  /dev/md322            ---       0     0
  /dev/md9              ---       0     0
[admin@Qnap-855 ~]#

Code:
[admin@Qnap-855 ~]# lvs -a
  WARNING: duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C is being used from both devices /dev/drbd3 and /dev/md3
  Found duplicate PV w1Dt5InUhUyHBSme58VYNwznRIeoMp5C: using /dev/drbd3 not /dev/md3
  Using duplicate PV /dev/drbd3 from subsystem DRBD, ignoring /dev/md3
  WARNING: duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U is being used from both devices /dev/drbd2 and /dev/md2
  Found duplicate PV BAlsT5bCQt92CGeDsofiZpUpve7ccX3U: using existing dev /dev/drbd2
  LV                      VG    Attr       LSize   Pool Origin                  Data%  Meta%  Move Log Cpy%Sync Convert
  lv1                     vg1   Vwi-aot---   4.00t tp1                          6.65                                  
  lv1312                  vg1   -wi-ao---- 360.00m                                                                    
  lv545                   vg1   -wi-------  35.30g                                                                    
  snap10001               vg1   Vwi-aot---   4.00t tp1  lv1                     6.16                                  
  tp1                     vg1   twi-aot---   3.35t                              8.27   0.09                          
  [tp1_tierdata_0]        vg1   vwi-aov---   4.00m                                                                    
  [tp1_tierdata_1]        vg1   vwi-aov---   4.00m                                                                    
  [tp1_tierdata_2]        vg1   Cwi-aoC---   3.35t      [tp1_tierdata_2_fcorig] 33.35                  0.06          
  [tp1_tierdata_2_fcorig] vg1   owi-aoC---   3.35t                                                                    
  [tp1_tmeta]             vg1   ewi-ao----  64.00g                                                                    
  lv2                     vg2   Vwi-aot---  10.71t tp2                          100.00                                
  lv546                   vg2   -wi------- 111.49g                                                                    
  tp2                     vg2   twi-aot---  10.72t                              99.99  1.12                          
  [tp2_tierdata_0]        vg2   vwi-aov---   4.00m                                                                    
  [tp2_tierdata_1]        vg2   vwi-aov---   4.00m                                                                    
  [tp2_tierdata_2]        vg2   Cwi-aoC---  10.72t      [tp2_tierdata_2_fcorig] 9.16                   70.48          
  [tp2_tierdata_2_fcorig] vg2   owi-aoC---  10.72t                                                                    
  [tp2_tmeta]             vg2   ewi-ao----  64.00g                                                                    
  lv256                   vg256 Cwi-aoC--- 391.53g                              99.95  0.24                          
  [lv256_cdata]           vg256 Cwi-ao---- 391.53g                                                                    
  [lv256_cmeta]           vg256 ewi-ao----  15.00g                                                                    
  lv544                   vg256 -wi-------   4.11g                                                                    
[admin@Qnap-855 ~]#
 
Could you also give:
qcli_storage
qcli_storage -d

You didn't happen to add disks from an old QNAP NAS to this NAS, did you?
 
Code:
[admin@Qnap-855 ~]# qcli_storage
Enclosure Port Sys_Name          Size      Type   RAID        RAID_Type    Pool TMeta  VolType      VolName
NAS_HOST  1    /dev/nvme0n1      465.76 GB cache  /dev/md1    RAID 1       256  --     flexible     Cache Volume
NAS_HOST  2    /dev/nvme1n1      465.76 GB cache  /dev/md1    RAID 1       256  --     flexible     Cache Volume
NAS_HOST  3    /dev/sde          3.64 TB   data   /dev/md2    RAID 1       1    64 GB  flexible     Snap Perso
NAS_HOST  4    /dev/sdd          3.64 TB   data   /dev/md2    RAID 1       1    64 GB  flexible     Snap Perso
NAS_HOST  5    /dev/sdc          3.64 TB   data   /dev/md3    RAID 5,512   2    64 GB  flexible     Data 
NAS_HOST  6    /dev/sdf          3.64 TB   data   /dev/md3    RAID 5,512   2    64 GB  flexible     Data 
NAS_HOST  7    /dev/sdb          3.64 TB   data   /dev/md3    RAID 5,512   2    64 GB  flexible     Data 
NAS_HOST  8    /dev/sda          3.64 TB   data   /dev/md3    RAID 5,512   2    64 GB  flexible     Data 
NAS_HOST  9    /dev/sdh          1.82 TB   free   /dev/md4(X) Single       288(X)--     Static       TimeMachine(X)
NAS_HOST  10   /dev/sdg          1.82 TB   free   /dev/md5(X) Single       289(X)--     Static       TimeMachine2(X)

Error info :
/dev/md4 : need to be recovered.
/dev/md5 : need to be recovered.
[admin@Qnap-855 ~]#

Code:
[admin@Qnap-855 ~]# qcli_storage -d
Enclosure  Port  Sys_Name          Type      Size      Alias                 Signature   Partitions  Model
NAS_HOST   1     /dev/nvme0n1      SSD:cache 465.76 GB M.2 PCIe SSD 1        QNAP FLEX   5           WD Red SN700 500GB
NAS_HOST   2     /dev/nvme1n1      SSD:cache 465.76 GB M.2 PCIe SSD 2        QNAP FLEX   5           WD Red SN700 500GB
NAS_HOST   3     /dev/sde          SSD:data  3.64 TB   2.5" SATA SSD 1       QNAP FLEX   5           WD Red SA500 2.5 4TB
NAS_HOST   4     /dev/sdd          SSD:data  3.64 TB   2.5" SATA SSD 2       QNAP FLEX   5           WD Red SA500 2.5 4TB
NAS_HOST   5     /dev/sdc          HDD:data  3.64 TB   3.5" SATA HDD 1       QNAP FLEX   5           WDC WD40EFPX-68C6CN0
NAS_HOST   6     /dev/sdf          HDD:data  3.64 TB   3.5" SATA HDD 2       QNAP FLEX   5           WDC WD40EFPX-68C6CN0
NAS_HOST   7     /dev/sdb          HDD:data  3.64 TB   3.5" SATA HDD 3       QNAP FLEX   5           WDC WD40EFPX-68C6CN0
NAS_HOST   8     /dev/sda          HDD:data  3.64 TB   3.5" SATA HDD 4       QNAP FLEX   5           WDC WD40EFPX-68C6CN0
NAS_HOST   9     /dev/sdh          HDD:free  1.82 TB   3.5" SATA HDD 5       QNAP FLEX   4           WDC WD20EZRX-00D8PB0
NAS_HOST   10    /dev/sdg          HDD:free  1.82 TB   3.5" SATA HDD 6       QNAP FLEX   4           WDC WD20EZRX-22D8PB0
[admin@Qnap-855 ~]#
 
Just to confirm: we agree that you do not want to keep the data on the following disks:

Code:
NAS_HOST   9     /dev/sdh          HDD:free  1.82 TB   3.5" SATA HDD 5       QNAP FLEX   4           WDC WD20EZRX-00D8PB0
NAS_HOST   10    /dev/sdg          HDD:free  1.82 TB   3.5" SATA HDD 6       QNAP FLEX   4           WDC WD20EZRX-22D8PB0

If YES, then do the following to erase them:

Removing the superblocks:
Code:
mdadm --zero-superblock /dev/sdh3
Code:
mdadm --zero-superblock /dev/sdg3
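If in doubt, the two partitions can first be checked read-only, and the stale arrays stopped, before the superblocks are zeroed (a minimal sketch, assuming the device and array names from the qcli_storage output above; adjust if they differ on your system):

Code:
# Read-only: show the md superblock metadata on each partition
mdadm --examine /dev/sdh3
mdadm --examine /dev/sdg3
# If md4/md5 are still assembled, stop them before zeroing
mdadm --stop /dev/md4
mdadm --stop /dev/md5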

Re-partitioning:
Code:
storage_util --disk_init dev_id=9
Code:
storage_util --disk_init dev_id=10
Code:
storv2util --nas-disk-write-signature-v2 enc_id=0,port_id=9
Code:
storv2util --nas-disk-write-signature-v2 enc_id=0,port_id=10

Refreshing the interface:
Code:
/etc/init.d/init_lvm.sh
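Afterwards, re-running the earlier diagnostics should show the two disks as free with no leftover md4/md5 arrays (a quick verification sketch, not part of the original instructions):

Code:
qcli_storage -d
md_checker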
 
Great, thank you very much.
All that's left is to set up my new TimeMachine volumes.
Thanks again for your invaluable help.
Best regards
 
Yes, the disks are currently being optimized.
Everything went perfectly.
Thanks again for your help.
 