QNAP assembly / superblock problem / etc.


Niminwedam

Good evening,

I am coming to you this evening hoping to find a solution to my problem.

About a year and a half ago, I successfully replaced a disk that had been showing a warning for some time.
A second disk went into warning as soon as the rebuild finished. Having anticipated this, I immediately started its replacement, but it did not go as planned.
A first check was done: it was a superblock problem, and the backups would not apply.
For lack of time, I powered the NAS off, planning to look at it later.
Later is now, because I need the data for work (nothing but photos).

The hardware is a TS-669 Pro.
It was configured as follows:
One pool of 5 disks in RAID 5
One pool of 1 disk in RAID 1 (operational)

As of today, I can no longer assemble the RAID 5 pool and therefore can no longer start it.

Here is an overall picture of the situation:
HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID: fe21c23d:ebf6c87c:38c0d0e5:d7dac770
Level: raid5
Devices: 5
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Oct 17 08:10:24 2024
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
-------------- 0 Missing -------------------------------------------
-------------- 1 Missing -------------------------------------------
-------------- 2 Missing -------------------------------------------
-------------- 3 Missing -------------------------------------------
4 /dev/sdc3 4 Active Jan 11 23:20:37 2026 6776 AAAAA
===============================================================================


RAID metadata found!
UUID: 888e41b0:7874222e:f3bb2e2d:91a2ee5d
Level: raid1
Devices: 1
Name: md2
Chunk Size: -
md Version: 1.0
Creation Time: Aug 1 12:39:42 2020
Status: ONLINE (md2)
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
6 /dev/sde3 0 Active Jan 20 20:34:01 2026 2 A
===============================================================================


Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid1 sde3[0]
1943559616 blocks super 1.0 [1/1]

md322 : active raid1 sde5[1] sdc5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdg4[36] sda4[32] sde4[33] sdd4[35] sdb4[34]
458880 blocks super 1.0 [32/5] [UUUU_U__________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdg1[0] sda1[34] sde1[33] sdd1[32] sdb1[2]
530048 blocks super 1.0 [32/5] [UUUU_U__________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>


Enclosure Port Sys_Name Type Size Alias Signature Partitions Model
NAS_HOST 1 /dev/sdg HDD:free 1.82 TB -- QNAP 4 TOSHIBA DT01ACA200
NAS_HOST 2 /dev/sda HDD:free 3.64 TB -- QNAP 4 Seagate ST4000VN006-3CW104
NAS_HOST 3 /dev/sdb HDD:free 1.82 TB -- QNAP 4 TOSHIBA DT01ACA200
NAS_HOST 4 /dev/sdc HDD:free 1.82 TB -- QNAP FLEX 5 TOSHIBA DT01ACA200
NAS_HOST 5 /dev/sdd HDD:free 3.64 TB -- QNAP 4 Seagate ST4000DM004-2CV104
NAS_HOST 6 /dev/sde HDD:data 1.82 TB -- QNAP FLEX 5 TOSHIBA DT01ACA200

pvs
PV VG Fmt Attr PSize PFree
/dev/md2 vg2 lvm2 a-- 1.81t 0

vgs
VG #PV #LV #SN Attr VSize VFree
vg2 1 3 0 wz--n- 1.81t 0

lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv2 vg2 Vwi-aot--- 1.50t tp2 100.00
lv545 vg2 -wi------- 18.54g
tp2 vg2 twi-aot--- 1.78t 84.44 0.31

--- Volume group ---
VG Name vg2
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 69
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.81 TiB
PE Size 4.00 MiB
Total PE 474501
Alloc PE / Size 474501 / 1.81 TiB
Free PE / Size 0 / 0
VG UUID jCvjOy-0Jyh-MPoy-j0XE-G0Er-n4cK-bzUz86
That is not a copy-paste error: I have lost all the volume and pool information for the corrupted array.

Where this really becomes a problem is here:
Enclosure Port Sys_Name Size Type RAID RAID_Type Pool TMeta VolType VolName
NAS_HOST 1 /dev/sdg 1.82 TB free -- -- -- -- -- --
NAS_HOST 2 /dev/sda 3.64 TB free -- -- -- -- -- --
NAS_HOST 3 /dev/sdb 1.82 TB free -- -- -- -- -- --
NAS_HOST 4 /dev/sdc(X) 1.82 TB free /dev/md1(X) RAID 5 -- -- -- --
NAS_HOST 5 /dev/sdd 3.64 TB free -- -- -- -- -- --
NAS_HOST 6 /dev/sde 1.82 TB data /dev/md2 Single 2 16 GB flexible DataVol2

Error info :
/dev/md1 : need to be recovered.
But...
/dev/sda1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 56c5eff4:d31436ab:179bfbca:e69f1fc2
Name : 9
Creation Time : Sun Jan 20 23:26:05 2019
Raid Level : raid1 <-- Why RAID 1?
Raid Devices : 32

Avail Dev Size : 1060216 (517.77 MiB 542.83 MB)
Array Size : 530048 (517.71 MiB 542.77 MB)
Used Dev Size : 1060096 (517.71 MiB 542.77 MB)
Super Offset : 1060232 sectors
Unused Space : before=0 sectors, after=120 sectors
State : clean
Device UUID : 5f3ff6b3:980ac1b0:2f11a666:247969f5

Internal Bitmap : -16 sectors from superblock
Update Time : Tue Jan 20 20:38:12 2026
Bad Block Log : 512 entries available at offset -8 sectors
Checksum : 5cf83e95 - correct
Events : 1355340


Device Role : Active device 1
Array State : AAAA.A.......................... ('A' == active, '.' == missing, 'R' == replacing)

I tried to reassemble, without success; when assembling manually, the disks are reported as busy.

I cannot run testdisk either, since /dev/md1 is no longer listed...
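For reference, a manual forced-assembly attempt on this array would look roughly like the sketch below (the member partitions are assumed from the output above; --force on a degraded array should be used with care):
Code:
# try to assemble md1 from its five presumed members and start it even if degraded
mdadm --assemble --force --run /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sdg3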

Do you have any ideas?

Thank you for your help :)
 
Hi,
From your output we can clearly see the trace of the RAID 5 here:
Code:
RAID metadata found!
UUID: fe21c23d:ebf6c87c:38c0d0e5:d7dac770
Level: raid5
Devices: 5
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: Oct 17 08:10:24 2024
Status: OFFLINE
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
-------------- 0 Missing -------------------------------------------
-------------- 1 Missing -------------------------------------------
-------------- 2 Missing -------------------------------------------
-------------- 3 Missing -------------------------------------------
4 /dev/sdc3 4 Active Jan 11 23:20:37 2026 6776 AAAAA
===============================================================================

The creation date is October 2024, and only sdc still seems to be a proper member of it.

For /dev/sda1, RAID 1 is normal: it is a system partition. The data partition is sda3.
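A quick way to confirm that on the data partitions themselves (the same mdadm -E examine command used further down, simply pointed at the data partitions):
Code:
mdadm -E /dev/sda3
mdadm -E /dev/sdc3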

Could you give me the output of these two commands:
Code:
parted /dev/sda print
Code:
parted /dev/sdc print
 
Hello,

Thanks for your reply.
The result was the same for sda3; I forgot to mention that.

Here is the output:
[~] # parted /dev/sda print

Model: Seagate ST4000VN006-3CW1 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 4000GB 4001GB 402MB ext3


[~] # parted /dev/sdc print
Model: TOSHIBA DT01ACA200 (scsi)
Disk /dev/sdc: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 1991GB 1990GB primary
4 1991GB 1992GB 543MB ext3 primary
5 1992GB 2000GB 8554MB linux-swap(v1) primary

There is no file system left on partition 3.
Now I understand better why mdadm was complaining about a missing superblock during assembly!
 
Hi,
If nothing else, you could try re-creating the partitions on disk sda to see whether the superblock reappears. Since there is nothing left on it, you are not risking much...

To do that, look up the WWN of disk sda in /etc/enclosure_0.conf (you can validate it against the model / serial number / ...), then look that WWN up in /mnt/HDA_ROOT/.conf to confirm the mapping.

If you have any doubt, post enclosure_0.conf and .conf here and I will confirm.
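For example, a quick way to pull the relevant entries out of those two files (plain grep against the file paths mentioned above):
Code:
grep -B 4 -A 4 "pd_sys_name = /dev/sda" /etc/enclosure_0.conf
grep pd_dev_wwn /mnt/HDA_ROOT/.conf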

If, for example, sda is at address 0x2, you repartition it like this:

Code:
cd /tmp

Code:
wget https://download.qnap.com/Storage/tsd/utility/QTS_5.0.1_repartition_utility/disk_util-x86_64 -O disk_util; chmod a+x disk_util

Code:
./disk_util --legacy_disk_init dev_id=0x2
(0x2 being the address of the disk, which you confirm after comparing the two config files!)
 
I can confirm I have a doubt, since some addresses are listed for more than one WWN:

[Index]
pd_bitmap = 0x7e
pd_sysid_sg0 = 2
pd_sysid_sda = 2
pd_sysid_sg1 = 3
pd_sysid_sdb = 3
pd_sysid_sg2 = 4
pd_sysid_sdc = 4
pd_sysid_sg3 = 5
pd_sysid_sdd = 5
pd_sysid_sg4 = 6
pd_sysid_sde = 6
pd_sysid_sg6 = 1
pd_sysid_sdg = 1

[PhysicalDisk_2]
port_id = 2
pd_sys_id = sg0
enc_sys_id = root
pd_sys_name = /dev/sda
pd_ctrl_name = /dev/sg0
pd_bus_name = 7:0:0:0
wwn = 5000C500E94AD656
real_wwn = 5000C500E94AD656
type = 0
form = 2
vendor = Seagate
model = ST4000VN006-3CW104
serial_no = ZW62YK0D
status = 0
capabilities = 198999
sector_size = 512
capacity = 7814037168
revision = SC60
protocol_ver = 196717
link_speed = 8
max_link_speed = 8
pd_max_link_speed = 0
read_speed = none
rotation_speed = 5400

[PhysicalDisk_3]
port_id = 3
pd_sys_id = sg1
enc_sys_id = root
pd_sys_name = /dev/sdb
pd_ctrl_name = /dev/sg1
pd_bus_name = 8:0:0:0
wwn = 5000039FFAC549BC
real_wwn = 5000039FFAC549BC
type = 0
form = 2
vendor = TOSHIBA
model = DT01ACA200
serial_no = 54IBMM7GS
status = 0
capabilities = 198999
sector_size = 512
capacity = 3907029168
revision = MX4OABB0


[PhysicalDisk_4]
port_id = 4
pd_sys_id = sg2
enc_sys_id = root
pd_sys_name = /dev/sdc
pd_ctrl_name = /dev/sg2
pd_bus_name = 9:0:0:0
wwn = 5000039FFAC549A5
real_wwn = 5000039FFAC549A5
type = 0
form = 2
vendor = TOSHIBA
model = DT01ACA200
serial_no = 54IBMLHGS
status = 0
capabilities = 198999
sector_size = 512
capacity = 3907029168
revision = MX4OABB0
protocol_ver = 524329
link_speed = 8
max_link_speed = 8
pd_max_link_speed = 0
read_speed = none
rotation_speed = 7200

[PhysicalDisk_5]
port_id = 5
pd_sys_id = sg3
enc_sys_id = root
pd_sys_name = /dev/sdd
pd_ctrl_name = /dev/sg3
pd_bus_name = 10:0:0:0
wwn = 5000C500A293F66A
real_wwn = 5000C500A293F66A
type = 0
form = 0
vendor = Seagate
model = ST4000DM004-2CV104
serial_no = Z9703YFY
status = 0
capabilities = 198743
sector_size = 512
capacity = 7814037168
revision = 0001
protocol_ver = 655469
link_speed = 8
max_link_speed = 8
pd_max_link_speed = 0
read_speed = 178.35 MB/sec
rotation_speed = 5425

[PhysicalDisk_6]
port_id = 6
pd_sys_id = sg4
enc_sys_id = root
pd_sys_name = /dev/sde
pd_ctrl_name = /dev/sg4
pd_bus_name = 11:0:0:0
wwn = 5000039FFAC72D63
real_wwn = 5000039FFAC72D63
type = 0
form = 2
vendor = TOSHIBA
model = DT01ACA200
serial_no = 643HTG8GS
status = 0
capabilities = 198999
sector_size = 512
capacity = 3907029168
revision = MX4OABB0
protocol_ver = 524329
link_speed = 8
max_link_speed = 8
pd_max_link_speed = 0
read_speed = 184.56 MB/sec
rotation_speed = 7200

[PhysicalDisk_1]
port_id = 1
pd_sys_id = sg6
enc_sys_id = root
pd_sys_name = /dev/sdg
pd_ctrl_name = /dev/sg6
pd_bus_name = 6:0:0:0
wwn = 5000039FFAC54A46
real_wwn = 5000039FFAC54A46
type = 0
form = 2
vendor = TOSHIBA
model = DT01ACA200
serial_no = 54IBMSPGS
status = 0
capabilities = 198999
sector_size = 512
capacity = 3907029168
revision = MX4OABB0
protocol_ver = 524329
link_speed = 8
max_link_speed = 8
pd_max_link_speed = 0
read_speed = none
rotation_speed = 7200

hw_addr = 00:08:9B:E0:6C:9C
QNAP = TRUE
mirror = 0
hal_support = yes
sm_v2_support = yes
pd_dev_wwn_5000039FFAC54A46 = 0x1
pd_dev_wwn_5000039FFAC549D1 = 0x2
pd_dev_wwn_5000039FFAC549BC = 0x3
pd_dev_wwn_5000039FFAC549A5 = 0x4
nas_capability = 0x1
pd_dev_wwn_5000C500A293F66A = 0x5
pd_dev_wwn_5000039FFAC72D63 = 0x6
pd_dev_wwn_5000C500E94AD656 = 0x2
pd_dev_wwn_5000C500E94B0F57 = 0x4
pd_err_wwn_5000039FFAC549A5 = 0x4
 
In your case it is indeed 0x2.

So you run:

Code:
cd /tmp

Code:
wget https://download.qnap.com/Storage/tsd/utility/QTS_5.0.1_repartition_utility/disk_util-x86_64 -O disk_util; chmod a+x disk_util

Code:
./disk_util --legacy_disk_init dev_id=0x2

Then check whether the superblock is back:
Code:
mdadm -E /dev/sda3

Also check the partitioning of the other disks.
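A minimal way to do that in one pass (device names taken from the enclosure listing above; adjust them if they have changed):
Code:
for d in sda sdb sdc sdd sde sdg; do echo "== /dev/$d =="; parted /dev/$d print; done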
 
Ah yes, that's right, your NAS is stuck on QTS 4.3.4.

The best option, I think, is to try to reproduce sdc's partitioning on sda.

Could you give me the output of the following command:
Code:
parted /dev/sdc unit s print
 
Hello,
Sorry, I am away from the NAS this weekend and SSH was no longer reachable; that problem is fixed now.

Here is the output:
[~] # parted /dev/sdc unit s print
Model: TOSHIBA DT01ACA200 (scsi)
Disk /dev/sdc: 3907029168s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 40s 1060289s 1060250s ext3 primary
2 1060296s 2120579s 1060284s linux-swap(v1) primary
3 2120584s 3889240109s 3887119526s primary
4 3889240112s 3890300399s 1060288s ext3 primary
5 3890300408s 3907007999s 16707592s linux-swap(v1) primary
 
I think there is a one-in-a-thousand chance this will work, but it is worth trying.

First, delete the existing partition on sda:
Code:
parted -s /dev/sda rm 1

Then re-create the 5 partitions, based on sdc:
Code:
parted -s /dev/sda mkpart primary 40s 1060250s

Code:
parted -s /dev/sda mkpart primary 1060296s 2120579s

Code:
parted -s /dev/sda mkpart primary 2120584s 3889240109s

Code:
parted -s /dev/sda mkpart primary 3889240112s 3890300399s

Code:
parted -s /dev/sda mkpart primary 3890300408s 3907007999s

Then check that the result looks right:
Code:
parted /dev/sda print

and whether the superblock is visible:
Code:
mdadm -E /dev/sda3
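It can also be worth comparing the new table sector by sector against sdc, to make sure the start and end values match exactly:
Code:
parted /dev/sda unit s print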
 
Not possible, partitions 1 & 4 are in use:

[~] # parted -s /dev/sda rm 1
Error: Partition(s) 1, 4 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
 
Could you have mounted the sda1 partition?

Give me the output of:
Code:
df -h

Also check again:
Code:
parted /dev/sda print

I do not understand why it mentions partition 4 when there is only one partition on the disk.
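A quick way to see what the system is still holding open on sda (read-only checks, nothing here modifies the disk):
Code:
grep sda /proc/mdstat
mount | grep sda
grep sda /proc/partitions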
 
As far as I know, nothing is mounted, unless it is being used by the system.

[~] # df -h
Filesystem Size Used Available Use% Mounted on
none 200.0M 184.2M 15.8M 92% /
devtmpfs 486.4M 8.0K 486.4M 0% /dev
tmpfs 64.0M 436.0K 63.6M 1% /tmp
tmpfs 492.4M 0 492.4M 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
/dev/md9 515.5M 166.0M 349.5M 32% /mnt/HDA_ROOT
cgroup_root 492.4M 0 492.4M 0% /sys/fs/cgroup
/dev/mapper/cachedev2
1.5T 596.6G 926.8G 39% /share/CACHEDEV2_DATA
/dev/md13 371.0M 345.1M 25.9M 93% /mnt/ext
/dev/ram2 433.9M 2.3M 431.6M 1% /mnt/update
tmpfs 64.0M 2.3M 61.7M 4% /samba
tmpfs 16.0M 24.0K 16.0M 0% /samba/.samba/lock/msg.lock
tmpfs 16.0M 0 16.0M 0% /mnt/ext/opt/samba/private/msg.sock
tmpfs 1.0M 0 1.0M 0% /mnt/rf/nd
[~] # parted /dev/sda print
Model: Seagate ST4000VN006-3CW1 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags

Out of curiosity, I also looked at:
[~] # mdadm --detail /dev/md13
/dev/md13:
Version : 1.0
Creation Time : Sun Jan 20 23:26:18 2019
Raid Level : raid1
Array Size : 458880 (448.20 MiB 469.89 MB)
Used Dev Size : 458880 (448.20 MiB 469.89 MB)
Raid Devices : 32
Total Devices : 5
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sat Jan 24 15:04:03 2026
State : active, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Name : 13
UUID : d6ba6382:14d5a98d:8711bfbb:46a4d52b
Events : 144542

Number Major Minor RaidDevice State
32 8 4 0 active sync /dev/sda4
34 8 20 1 active sync /dev/sdb4
35 8 52 2 active sync /dev/sdd4
36 8 100 3 active sync /dev/sdg4
8 0 0 8 removed
33 8 68 5 active sync /dev/sde4
12 0 0 12 removed
14 0 0 14 removed
16 0 0 16 removed
18 0 0 18 removed
20 0 0 20 removed
22 0 0 22 removed
24 0 0 24 removed
26 0 0 26 removed
28 0 0 28 removed
30 0 0 30 removed
32 0 0 32 removed
34 0 0 34 removed
36 0 0 36 removed
38 0 0 38 removed
40 0 0 40 removed
42 0 0 42 removed
44 0 0 44 removed
46 0 0 46 removed
48 0 0 48 removed
50 0 0 50 removed
52 0 0 52 removed
54 0 0 54 removed
56 0 0 56 removed
58 0 0 58 removed
60 0 0 60 removed
62 0 0 62 removed
 
Apparently it did delete the partition after all:
Code:
[~] # parted /dev/sda print
Model: Seagate ST4000VN006-3CW1 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags

So go ahead and try the rest of the commands to create the partitions.
 
It goes through, but still no superblock:

[~] # parted -s /dev/sda mkpart primary 40s 1060250s
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 1, 4 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[~] # parted -s /dev/sda mkpart primary 1060296s 2120579s
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 1, 4 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[~] # parted -s /dev/sda mkpart primary 2120584s 3889240109s
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 1, 4 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[~] # parted -s /dev/sda mkpart primary 3889240112s 3890300399s
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 1, 4 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[~] # parted -s /dev/sda mkpart primary 3890300408s 3907007999s
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 1, 4 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[~] # parted /dev/sda print
Model: Seagate ST4000VN006-3CW1 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 1991GB 1990GB primary
4 1991GB 1992GB 543MB primary
5 1992GB 2000GB 8554MB primary

[~] # mdadm -E /dev/sda3
mdadm: No md superblock detected on /dev/sda3.
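(For what it is worth, the repeated "unable to inform the kernel" messages above suggest the kernel is still using the old partition table, so the new layout may not actually be in effect yet. Asking the kernel to re-read the table, or rebooting as parted suggests, is usually needed first. A sketch, assuming blockdev from util-linux is available on QTS:)
Code:
blockdev --rereadpt /dev/sda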
 
Darn :(
If nothing else, you could try the same thing on sdg, sdb and sdd. In a RAID 5, one missing member does not prevent a rebuild... but at this stage I no longer have much hope.
 
It does, however, find backup superblocks on sda3.
I will try the others.
At best, with a bit of luck, maybe I will find md1 again in testdisk.
 
Everything is done:
Model: TOSHIBA DT01ACA200 (scsi)
Disk /dev/sdg: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 2000GB 1999GB primary
4 2000GB 2000GB 510MB ext3 primary

[~] # mdadm -E /dev/sdg3
mdadm: No md superblock detected on /dev/sdg3.

Model: TOSHIBA DT01ACA200 (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 2000GB 1999GB primary
4 2000GB 2000GB 510MB ext3 primary

[~] # mdadm -E /dev/sdb3
mdadm: No md superblock detected on /dev/sdb3.

Model: Seagate ST4000DM004-2CV1 (scsi)
Disk /dev/sdd: 4001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 4000GB 3999GB primary
4 4000GB 4001GB 510MB ext3 primary

[~] # mdadm -E /dev/sdd3
mdadm: No md superblock detected on /dev/sdd3.

There were a few "errors", which are most likely normal since the disks were not all the same size:
[~] # parted -s /dev/sdg mkpart primary 40s 1060250s
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 1 on /dev/sdg have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[~] # parted -s /dev/sdg mkpart primary 1060296s 2120579s
Error: You requested a partition from 543MB to 1086MB (sectors 1060296..2120579).
The closest location we can manage is 543MB to 543MB (sectors 1060295..1060295).
[~] # parted -s /dev/sdg mkpart primary 2120584s 3889240109s
Error: You requested a partition from 1086MB to 1991GB (sectors 2120584..3889240109).
The closest location we can manage is 1086MB to 1086MB (sectors 2120583..2120583).
[~] # parted -s /dev/sdg mkpart primary 3889240112s 3890300399s
Error: You requested a partition from 1991GB to 1992GB (sectors 3889240112..3890300399).
The closest location we can manage is 2000GB to 2000GB (sectors 3906011970..3906011970).
[~] # parted -s /dev/sdg mkpart primary 3890300408s 3907007999s
Error: You requested a partition from 1992GB to 2000GB (sectors 3890300408..3907007999).
The closest location we can manage is 2000GB to 2000GB (sectors 3906011970..3906011975).
[~] # parted -s /dev/sdd mkpart primary 40s 1060250s
Warning: The resulting partition is not properly aligned for best performance.
Error: Partition(s) 1 on /dev/sdd have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
[~] # parted -s /dev/sdd mkpart primary 1060296s 2120579s
Error: You requested a partition from 543MB to 1086MB (sectors 1060296..2120579).
The closest location we can manage is 543MB to 543MB (sectors 1060295..1060295).
[~] # parted -s /dev/sdd mkpart primary 2120584s 3889240109s
Error: You requested a partition from 1086MB to 1991GB (sectors 2120584..3889240109).
The closest location we can manage is 1086MB to 1086MB (sectors 2120583..2120583).
[~] # parted -s /dev/sdd mkpart primary 3889240112s 3890300399s
Error: You requested a partition from 1991GB to 1992GB (sectors 3889240112..3890300399).
The closest location we can manage is 4000GB to 4000GB (sectors 7813019970..7813019970).
[~] # parted -s /dev/sdd mkpart primary 3890300408s 3907007999s
Error: You requested a partition from 1992GB to 2000GB (sectors 3890300408..3907007999).
The closest location we can manage is 4000GB to 4000GB (sectors 7813019970..7813019970).