Implement Ceph S3 Object Storage in Drupal 7.x

Vineet Kumar
Jul 13, 2020

Lots of articles are available for bucket URLs of the form https://<bucket_name>.<host_name>/ (virtual-hosted style), but this article implements bucket URLs of the form https://<host_name>/<bucket_name>/ (path style).
Follow the rules and instructions below to implement Ceph S3 object storage in Drupal 7.x.

  1. Dependencies and Other Requirements
    - S3fs 2.x — https://www.drupal.org/project/s3fs
    - Libraries API 2.x — https://drupal.org/project/libraries
    - AWS SDK for PHP — http://aws.amazon.com/sdk-for-php
    - PHP 5.3.3+ is required. The AWS SDK will not work on earlier versions.
    - PHP must be configured with “allow_url_fopen = On” in your php.ini file. Otherwise, PHP will be unable to open files that are in your S3 bucket.
    S3 File System uses the Libraries module to access the AWS SDK for PHP 2.x library. Please note that AWS SDK for PHP 3.x is not compatible with S3 File System; you must install the 2.x version of the SDK.
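    If you are unsure whether your environment qualifies, here is a quick sanity check (a minimal sketch; save it as check.php and run it with “php check.php”):
    <?php
    // Verify the PHP version and allow_url_fopen before installing anything.
    if (version_compare(PHP_VERSION, '5.3.3', '<')) {
      echo 'PHP 5.3.3+ is required; found ' . PHP_VERSION . "\n";
    }
    if (!ini_get('allow_url_fopen')) {
      echo "allow_url_fopen is Off; set it to On in php.ini\n";
    }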
  2. Installation
    a. Install Libraries version 2.x from http://drupal.org/project/libraries into sites/all/modules/contributed (skip this step if it is already installed).
    b. Download the s3fs module from https://ftp.drupal.org/files/projects/s3fs-7.x-2.13.zip, extract it to sites/all/modules/contributed, and enable it with Drush:
    drush en s3fs
    c. Install the AWS SDK for PHP. If the server is connected to the Internet, you can install the SDK with one of these commands:
    drush make --no-core sites/all/modules/contributed/s3fs/s3fs.make
    drush make --no-core sites/all/modules/s3fs/s3fs.make
    Otherwise, install the AWS SDK manually:
    Download the AWS SDK from https://github.com/aws/aws-sdk-php/releases/download/2.7.25/aws.zip and extract the zip file into the sites/all/libraries/awssdk2 folder so that the path to aws-autoloader.php is sites/all/libraries/awssdk2/aws-autoloader.php.
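    The manual download can be scripted like this (a sketch; it assumes wget and unzip are available and that aws.zip contains aws-autoloader.php at its top level):
    cd sites/all/libraries
    wget https://github.com/aws/aws-sdk-php/releases/download/2.7.25/aws.zip
    unzip aws.zip -d awssdk2
    ls awssdk2/aws-autoloader.php # should print the path if the layout is correct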
  3. To modify the bucket URL structure, open the AWS SDK file sites/all/libraries/awssdk2/Aws/S3/BucketStyleListener.php, go to line 67, and change the code like this:
    # $request->setHost($bucket . '.' . $request->getHost()); // Comment out this line and add the following line
    $request->setHost($request->getHost() . '/' . $bucket);
    Save and exit the file, then run “drush cc all” to clear all caches.
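    To confirm the change took effect, you can grep the file (the exact line number may differ between SDK releases):
    grep -n "setHost" sites/all/libraries/awssdk2/Aws/S3/BucketStyleListener.php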
  4. In the settings.php file (sites/default/settings.php), add the following lines at the bottom:
    $conf['awssdk2_access_key'] = 'XXXXXXXXXXXXXXXXXXXXXXXX'; // Access key created by the Ceph/storage admin
    $conf['awssdk2_secret_key'] = 'XXXXXXXXXXXXXXXXXXXXXXXX'; // Secret key created by the Ceph/storage admin
    $conf['s3fs_bucket'] = 'bucket_name/'; // Bucket name created by the Ceph/storage admin
    $conf['s3fs_use_customhost'] = TRUE;
    $conf['s3fs_hostname'] = 'https://abc.test.com'; // Host created by the Ceph/storage admin
    $conf['s3fs_use_cname'] = TRUE;
    $conf['s3fs_domain'] = 'abc.test.com'; // Same as the hostname, but without https://
    $conf['s3fs_use_path_style_endpoint'] = TRUE;
    $conf['s3fs_public_folder'] = 'mybucket'; // Creates a directory under the bucket root
    $conf['s3fs_use_s3_for_public'] = TRUE; // Enables “Use S3 for public:// files”
    $conf['s3fs_no_rewrite_cssjs'] = TRUE; // Do not rewrite CSS and JS paths
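    To verify that Drupal picks up these overrides, you can dump a couple of the variables from the command line (a quick sketch; variable_get() reflects $conf overrides once Drupal has bootstrapped):
    drush eval "var_dump(variable_get('s3fs_bucket'), variable_get('s3fs_hostname'));"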
  5. Configure Portal to Use s3fs
    A. Visit the admin/config/media/file-system page and set the “Default download method” to “Amazon Simple Storage Service”. Then add a field of type File, Image, etc. and set the “Upload destination” to “Amazon Simple Storage Service” in the “Field Settings” tab.
    B. On the s3fs configuration page (admin/config/media/s3fs) you can enable the “Use S3 for public:// files” and/or “Use S3 for private:// files” options to make s3fs take over the public and/or private file systems. This causes the site to store newly uploaded or generated files from the public/private file system in S3 instead of the local file system. (Already enabled in settings.php above.)
    C. Now save the configuration.
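    If you prefer the command line, the “Default download method” setting corresponds to Drupal 7’s file_default_scheme variable, and “s3” is the scheme that s3fs registers (an assumption based on core’s file-system settings form, so double-check in the UI):
    drush vset file_default_scheme s3
    drush cc all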
  6. Now everything is set up. You are strongly encouraged to use the drush command “drush s3fs-copy-local” to copy your existing files into the bucket: it copies all files into the correct subfolders according to your s3fs configuration and writes them to the metadata cache so that duplicate copies are not sent.
  7. If you don’t have drush, you can use the buttons provided on the S3FS Actions page (admin/config/media/s3fs/actions), though the copy operation may fail if you have a lot of files, or very large files. The Drush command will cleanly handle any combination of files.
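    A typical sequence looks like this (a sketch; s3fs 7.x-2.x also ships a metadata-refresh drush command, but check “drush help” for the exact command names in your version):
    drush s3fs-copy-local # copy existing local public/private files into the bucket
    drush s3fs-refresh-cache # rebuild the s3fs metadata cache from the bucket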
  8. Enjoy!!!
