AWS Storage Gateway Test

I recently wanted to take a quick look at the File Gateway. It is described as “Store files as objects in Amazon S3, with a local cache for low-latency access to your most recently used data.” I tried it on VirtualBox using the VMware ESXi image they offer.


  • Download the VMware ESXi image.
  • In VirtualBox, import the OVA AWS-Appliance-2018-12-11-1544560738.ova.
  • Adjust memory from 16 GB down to 10 GB. Avoid this if possible; in my case the host was short on memory.
  • Change to bridged networking instead of NAT.
  • Add a SAS controller and a thick-provisioned disk. I used type VDI and 8 GB for my test.
  • Use the SAS disk attached to the VirtualBox VM as cache in the AWS Storage Gateway console (a CLI alternative is sketched after this list).
  • Share files over NFS (for SMB you will need Microsoft AD).
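
The cache disk can also be attached from the CLI instead of the console. A minimal sketch, assuming the gateway ARN and disk ID shown in the listings below; the values are placeholders:

$ aws storagegateway add-cache \
    --gateway-arn arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...> \
    --disk-ids pci-0000:00:16.0-sas-0x00060504030201a0-lun-0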

Some useful CLI commands

$ aws storagegateway list-gateways
    "Gateways": [
            "GatewayId": "sgw-<...>",
            "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>",
            "GatewayType": "FILE_S3",
            "GatewayOperationalState": "ACTIVE",
            "GatewayName": "iq-st01"

$ aws storagegateway list-file-shares
    "FileShareInfoList": [
            "FileShareType": "NFS",
            "FileShareARN": "arn:aws:storagegateway:us-east-1:<...>:share/share-<...>",
            "FileShareId": "share-<...>",
            "FileShareStatus": "AVAILABLE",
            "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>"

$ aws storagegateway list-local-disks --gateway-arn arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>
    "GatewayARN": "arn:aws:storagegateway:us-east-1:<...>:gateway/sgw-<...>",
    "Disks": [
            "DiskId": "pci-0000:00:16.0-sas-0x00060504030201a0-lun-0",
            "DiskPath": "/dev/sda",
            "DiskNode": "SCSI (0:0)",
            "DiskStatus": "present",
            "DiskSizeInBytes": 8589934592,
            "DiskAllocationType": "CACHE STORAGE"

Mount test

# mount -t nfs -o nolock,hard <gateway-ip>:/<bucket-name> /mnt/st01
# nfsstat -m
/mnt/st01 from <gateway-ip>:/<bucket-name>
 Flags:	rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=<...>,local_lock=none,addr=<...>
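
Files written through the share end up as objects in the backing bucket (the upload is asynchronous, so it can take a moment). A quick check, with the bucket name as a placeholder:

# echo hello > /mnt/st01/test.txt
$ aws s3 ls s3://<bucket-name>/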

Linux mount NFS v4.2

Just a quick test of NFS v4.2. Both the server and the client were running Ubuntu 17.04.

# cat /etc/exports 
/DATA	*(ro,sync,no_root_squash,insecure)

# systemctl restart nfs-kernel-server
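
As an aside, if only /etc/exports changed, a full restart is not needed; exportfs can re-read it, and -v shows what is currently exported:

# exportfs -ra
# exportfs -v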

# more /proc/fs/nfsd/versions 
+2 +3 +4 +4.1 +4.2
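
The client and server negotiate the highest version both support. To force clients onto v4.x, older versions can be switched off via rpc.nfsd's -N/--no-nfs-version option. A sketch run by hand with 8 server threads; on Ubuntu this would normally be wired into the service configuration instead:

# rpc.nfsd -N 2 -N 3 8
# cat /proc/fs/nfsd/versions
-2 -3 +4 +4.1 +4.2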

# mount -t nfs -o minorversion=2 server1:/DATA /DATA
# nfsstat -m
/mnt/home from server1:/home
 Flags:	rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<...>,mountvers=3,mountport=41341,mountproto=udp,local_lock=none,addr=<...>

/DATA from server1:/DATA
 Flags:	rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=<...>,local_lock=none,addr=<...>
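
On recent nfs-utils the same mount can be spelled with vers= directly instead of minorversion=, which reads a bit better:

# mount -t nfs -o vers=4.2 server1:/DATA /DATA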

# rsync -a --progress ubuntu-17.04-desktop-amd64.iso /DATA/DATABANK/iso/
sending incremental file list
ubuntu-17.04-desktop-amd64.iso
  1,609,039,872 100%  157.75MB/s    0:00:09 (xfr#1, to-chk=0/1)
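
One feature that is actually new in v4.2 is server-side fallocate()/sparse-file support (ALLOCATE and SEEK in the protocol). A quick hedged check: on a v4.2 mount the preallocation below should succeed as a single server-side operation instead of writing zeros over the wire, while on a v3 mount fallocate typically fails with "Operation not supported":

# fallocate -l 1G /DATA/prealloc.img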