Hello everyone. I'm new to the site and have browsed the forum quite a bit trying to understand my problem: my NAS crashed and then ran a data rebuild (for roughly 11 hours), and now I'm stuck at the data-restoration stage. The thick volume does report that data is still present on the 4 drives, but I can't access it. I'm posting the output of the diagnostic commands below, along with a read-only mount I was planning to try (see the sketch after the output). If you have any idea how I could try to recover the data, thank you in advance!
[~] # qcli_storage
Enclosure Port Sys_Name Size Type RAID RAID_Type Pool TMeta VolType VolName
NAS_HOST 1 /dev/sda 3.64 TB data /dev/md1 RAID 5,64 1 16 GB flexible DataVol1
NAS_HOST 2 /dev/sdb 3.64 TB data /dev/md1 RAID 5,64 1 16 GB flexible DataVol1
NAS_HOST 3 /dev/sdc 3.64 TB data /dev/md1 RAID 5,64 1 16 GB flexible DataVol1
NAS_HOST 4 /dev/sdd 3.64 TB data /dev/md1 RAID 5,64 1 16 GB flexible DataVol1
[~] # qcli_storage -d
Enclosure Port Sys_Name Type Size Alias Signature Partitions Model
NAS_HOST 1 /dev/sda HDD:data 3.64 TB -- QNAP FLEX 5 Seagate ST4000DM004-2U9104
NAS_HOST 2 /dev/sdb HDD:data 3.64 TB -- QNAP FLEX 5 Seagate ST4000DM004-2CV104
NAS_HOST 3 /dev/sdc HDD:data 3.64 TB -- QNAP 5 Seagate ST4000DM004-2CV104
NAS_HOST 4 /dev/sdd HDD:data 3.64 TB -- QNAP FLEX 5 Seagate ST4000DM004-2CV104
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sdd3[4] sdb3[0] sdc3[3] sda3[5]
11691190848 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
md322 : active raid1 sdd5[3](S) sdc5[2](S) sdb5[1] sda5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md256 : active raid1 sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdb4[0] sdd4[2] sdc4[3] sda4[32]
458880 blocks super 1.0 [32/4] [UUUU____________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
md9 : active raid1 sdb1[0] sdd1[2] sdc1[3] sda1[32]
530048 blocks super 1.0 [32/4] [UUUU____________________________]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
[~] # md_checker
Welcome to MD superblock checker (v1.4) - have a nice day~
Scanning system...
HAL firmware detected!
Scanning Enclosure 0...
RAID metadata found!
UUID: 68fc061a:d6d1a9bd:6133ee04:18c26eb2
Level: raid5
Devices: 4
Name: md1
Chunk Size: 64K
md Version: 1.0
Creation Time: May 30 12:00:43 2020
Status: ONLINE (md1) [UUUU]
===============================================================================
Disk | Device | # | Status | Last Update Time | Events | Array State
===============================================================================
2 /dev/sdb3 0 Active Jul 7 06:32:42 2025 13346100 AAAA
1 /dev/sda3 1 Active Jul 7 06:32:42 2025 13346100 AAAA
4 /dev/sdd3 2 Active Jul 7 06:32:42 2025 13346100 AAAA
3 /dev/sdc3 3 Active Jul 7 06:32:42 2025 13346100 AAAA
===============================================================================
[~] # pvs -a
PV VG Fmt Attr PSize PFree
/dev/md1 vg1 lvm2 a-- 10.89t 0
/dev/md13 --- 0 0
/dev/md256 --- 0 0
/dev/md322 --- 0 0
/dev/md9 --- 0 0
[~] # lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv1 vg1 Vwi-aot--- 10.76t tp1 100.00
lv544 vg1 -wi------- 111.49g
tp1 vg1 twi-aot--- 10.76t 99.96 2.17
[tp1_tdata] vg1 Twi-ao---- 10.76t
[tp1_tmeta] vg1 ewi-ao---- 16.00g
[~] # ls -alh /dev/mapper/
total 0
drwxr-xr-x 2 admin administrators 180 2025-07-06 09:43 ./
drwxr-xr-x 13 admin administrators 20K 2025-07-07 06:16 ../
brw------- 1 admin administrators 253, 0 2025-07-06 09:43 cachedev1
crw------- 1 admin administrators 10, 236 2025-07-06 10:42 control
brw------- 1 admin administrators 253, 5 2025-07-06 09:43 vg1-lv1
brw------- 1 admin administrators 253, 4 2025-07-06 09:43 vg1-tp1
brw------- 1 admin administrators 253, 2 2025-07-06 09:43 vg1-tp1_tdata
brw------- 1 admin administrators 253, 1 2025-07-06 09:43 vg1-tp1_tmeta
brw------- 1 admin administrators 253, 3 2025-07-06 09:43 vg1-tp1-tpool
[~] #
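For what it's worth, here is the next step I was planning to try, based on similar threads here: a read-only mount of the mapper device that QTS appears to use for DataVol1, just to check whether the filesystem is still intact. I have not run it yet, and the mount point /mnt/recovery is only an example:

# Read-only mount attempt (assumption: cachedev1 is the device QTS mounts
# for DataVol1, as suggested by the /dev/mapper listing above;
# /mnt/recovery is a hypothetical mount point)
mkdir -p /mnt/recovery
mount -o ro /dev/mapper/cachedev1 /mnt/recovery
ls -l /mnt/recovery

# If the mount fails, check the kernel log for filesystem or thin-pool errors
dmesg | tail -n 50

If the mount succeeds, my plan would be to copy the data off before touching anything else; if it fails, I'll post the dmesg output here.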