If you have a large volume (like a disk array or a next-generation SATA disk) and you’re trying to create a single, giant partition for whatever reason, you should know that fdisk (for DOS compatibility reasons, I suppose) cannot create partitions bigger than ~1.5TB, although it won’t throw any error or complain. So if you want to create a bigger partition, use parted (or one of its frontends). The limitation applies to fdisk, cfdisk and the whole *fdisk family.
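The *fdisk ceiling isn’t arbitrary: an MBR (msdos) partition table stores a partition’s start and length as 32-bit sector counts, so with 512-byte sectors the format tops out at 2TiB (the ~1.5TB figure may be further reduced by the array or driver, but 2TiB is the hard limit of the on-disk format). A quick check of the arithmetic:

```shell
# MBR stores a partition's start and length as 32-bit sector counts;
# with 512-byte sectors that caps any single partition at:
echo $(( 2**32 * 512 ))           # bytes
echo $(( 2**32 * 512 / 2**40 ))   # the same figure in TiB
```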
EDIT: in parted you have to change the disk label type to something like gpt (with mklabel), or you still won’t be able to create a partition bigger than 1.5TB.
Nonetheless, once the very large partition is created, I still haven’t found a way to format it, mount it and get all my terabytes. I’m still stuck with 1.5TB. Look at this:
server:~# parted /dev/mapper/mpath1
GNU Parted 1.7.1
Using /dev/mapper/mpath1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Disk /dev/mapper/mpath1: 6000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  6000GB  6000GB  ext2
server:~# df -h |grep mpath1p1
/dev/mapper/mpath1p1 1.5T 5.1M 1.5T 1% /mnt/logs
I’m stuck with this. Any idea, dear lazyweb?
LVM! You can make the physical partitions as required by your partition table, but then use Logical Volume Management to join multiple partitions together into larger filesystems.
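A sketch of that approach, with hypothetical device names (/dev/sdb1 and /dev/sdc1 stand in for whatever partitions your table allows; volume group and logical volume names are made up too) — the exact commands need root and real block devices:

```shell
# Join several smaller physical partitions into one big logical volume.
pvcreate /dev/sdb1 /dev/sdc1           # mark the partitions as LVM physical volumes
vgcreate vg_big /dev/sdb1 /dev/sdc1    # pool them into one volume group
lvcreate -l 100%FREE -n lv_big vg_big  # one logical volume spanning the whole pool
mkfs.ext3 /dev/vg_big/lv_big           # then format and mount as usual
mount /dev/vg_big/lv_big /mnt/big
```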
# fdisk -l /dev/sdc
Disk /dev/sdc: 3836.4 GB, 3836482682880 bytes
255 heads, 63 sectors/track, 466425 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
# parted /dev/sdc
(parted) mklabel bsd
(parted) mkpartfs p ext2 0 3658754
(parted) q
# mount /dev/sdc1 /sdc1
# df -h /sdc1/
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 3.5T 52K 3.3T 1% /sdc1
hope that helps
By the way, where did 1.5TB come from? I have a machine with an msdos-style partition table (CentOS doesn’t support gpt), and it lets me create/use partitions up to 2.0TB, not 1.5.
Have you taken a look at LVM?
@augmentedfourth: yes, I know LVM can be a workaround in this case (I wouldn’t call it a solution, because it adds overhead, and in this case I may not need the benefits LVM gives me while still having to pay the price for them).
Anyway, I really don’t know where this 1.5TB came from… in fact, searching on Google I can’t find anyone in my situation, so something strange is happening, but I don’t have a clue why it’s 1.5TB.
Dude, you’re doing something wrong…
jmilk@files:~# sudo fdisk -l
Disk /dev/sda: 1279.9 GB, 1279954780160 bytes
255 heads, 63 sectors/track, 155612 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 62 497983+ 83 Linux
/dev/sda2 63 184 979965 82 Linux swap / Solaris
/dev/sda3 185 1400 9767520 83 Linux
/dev/sda4 1401 155612 1238707890 8e Linux LVM
Disk /dev/sdb: 1999.9 GB, 1999978364928 bytes
255 heads, 63 sectors/track, 243150 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 243150 1953102343+ 8e Linux LVM
jmilk@files:~# sudo df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 9.4G 3.9G 5.5G 42% /
varrun 498M 308K 498M 1% /var/run
varlock 498M 4.0K 498M 1% /var/lock
udev 498M 96K 498M 1% /dev
devshm 498M 0 498M 0% /dev/shm
/dev/sda1 456M 27M 405M 7% /boot
/dev/mapper/vg_data-lv_data
3.0T 1.3T 1.7T 44% /data
2TB works in fdisk for sure.
Ok, I did a search about this topic and found this.
As of late I’ve been installing FC9 and have never had an issue creating a partition in excess of 5TB with its installer. I’m pretty sure it uses parted in its TUI installer.
So at home, installing my new Gentoo VPS server with a 3TB array, I ran into this weird fdisk issue myself.
After some poking around and reading this forum, I can confirm that fdisk’s limit is 2TB. It simply won’t allow you to create anything larger, nor, for that matter, any more partitions on the same disk beyond that 2TB mark.
So I decided to use parted for the first time and successfully created a 2.9TB partition, formatted it as XFS and as ReiserFS with zero issues, and was able to mount it each time with no problems.
A df -h read a nice 2.7TB.
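The gap between parted’s 2.9TB and df’s 2.7T is mostly units rather than lost space: parted reports decimal terabytes (10^12 bytes) while df -h reports binary ones (2^40 bytes), plus a little filesystem overhead. A quick conversion, using the figures from the comment above:

```shell
# Convert decimal TB (what parted prints) to binary TiB (what df -h prints).
awk 'BEGIN { printf "%.2f\n", 2.9e12 / 2^40 }'   # parted's 2.9TB in TiB
awk 'BEGIN { printf "%.2f\n", 3.0e12 / 2^40 }'   # the "3TB" array in TiB
```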
cheers gents…