Increasing disk space on the fly in AWS is (kind of) a joy, especially compared to the challenges of bare-metal or on-premise systems.
When working with disks, it is a good idea to have some understanding of how they work. Normally you would have a set of disks defined under /dev (for devices), and externally attached disks should show up there as well.
If you have an EC2 instance with two EBS volumes attached, they will show up as block devices. One thing to note: because we are using t3 (Nitro) instances, EBS volumes show up with NVMe device names (such as /dev/nvme0n1), but that should not matter. In the AWS console, however, they are not shown with the nvme names.
These volumes are normally reflected inside your EC2 instance, provided you have them mounted (you have attached them, right?!).
df is a utility to “Show information about the file system on which each FILE resides, or all file systems by default”. You should see something like this if both disks are attached and mounted.
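An illustrative example (the device names and sizes here are assumptions; yours will differ):

```shell
# Show mounted filesystems with human-readable sizes.
# Illustrative output for a 50G root volume and a 100G data volume.
df -h
# Filesystem      Size  Used Avail Use% Mounted on
# /dev/nvme0n1p1   50G   20G   28G  42% /
# /dev/nvme1n1    100G   12G   83G  13% /data
```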
lsblk is a utility to list details about block devices (disks). Its output will probably look something like this.
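A sketch of what lsblk shows on a Nitro instance with two EBS volumes (device names and sizes assumed):

```shell
# List block devices as a tree of disks and their partitions.
# Illustrative output; your device names will differ.
lsblk
# NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
# nvme0n1     259:0    0   50G  0 disk
# └─nvme0n1p1 259:1    0   50G  0 part /
# nvme1n1     259:2    0  100G  0 disk /data
```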
You can increase the volume size through the console or through some other means; either way it is relatively straightforward (please snapshot it first, I don’t want the blame if you break something).
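If you prefer not to click through the console, a sketch using the AWS CLI (the volume ID here is a placeholder, and this assumes your credentials are already configured):

```shell
# Snapshot first, then resize the EBS volume to 100 GiB.
# vol-0123456789abcdef0 is a placeholder volume ID.
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "pre-resize safety snapshot"
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100
```

The modification runs in the background; the new size becomes visible to the instance once the volume enters the optimizing state.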
After this you should be able to see the extra size reflected in lsblk. In this example we have increased the disk size to 100GB.
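Illustrative lsblk output after such a resize (device names assumed): the disk now reports 100G while its partition is still the old size.

```shell
# The disk has grown, but the partition has not (yet).
lsblk
# NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
# nvme0n1     259:0    0  100G  0 disk
# └─nvme0n1p1 259:1    0   50G  0 part /
```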
But the actual size available in df has not changed:
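For example (illustrative output; the root filesystem still reports its old 50G size):

```shell
# The filesystem still has its pre-resize capacity.
df -h /
# Filesystem      Size  Used Avail Use% Mounted on
# /dev/nvme0n1p1   50G   20G   28G  42% /
```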
You have increased the disk, but the new size does not immediately show up in df. You will need to grow the partition and expand the filesystem to be able to use the new space. To do this you need to know what kind of filesystem is on the block device; on Linux it is commonly one of the ext family (usually ext4 these days).
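You can check which filesystem you are dealing with before resizing; df -T prints the type column (output illustrative):

```shell
# Print the filesystem type (the Type column) for the root filesystem.
df -T /
# Filesystem     Type 1K-blocks     Used Available Use% Mounted on
# /dev/nvme0n1p1 ext4  51340768 20971520  28252480  43% /
```

lsblk -f shows the same information per block device if you prefer that view.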
You can grow the partition with growpart:
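A sketch, assuming the example device /dev/nvme0n1 with the root partition as partition 1 (adjust both to your system); note that growpart takes the disk and the partition number as separate arguments:

```shell
# Grow partition 1 of /dev/nvme0n1 to fill the free space on the disk.
# growpart ships in the cloud-utils (or cloud-guest-utils) package.
sudo growpart /dev/nvme0n1 1
# On success it prints a CHANGED: line with the old and new partition size.
```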
Then you can expand the filesystem with resize2fs, which completes the job:
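A sketch, again assuming /dev/nvme0n1p1 as the partition that was just grown. This works for ext2/3/4 filesystems; for XFS you would use xfs_growfs with the mount point instead:

```shell
# Expand the ext4 filesystem to fill its (now larger) partition.
# resize2fs ships with e2fsprogs and can grow a mounted filesystem online.
sudo resize2fs /dev/nvme0n1p1
```

For an XFS root filesystem the equivalent would be sudo xfs_growfs /.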
Now it is reflected in df and you can use all your new space!