s3-beam

Usage | Changes

s3-beam is a Clojure/ClojureScript library designed to help you upload files from the browser to S3 (CORS upload). s3-beam can also upload files from the browser to DigitalOcean Spaces.

[org.martinklepsch/s3-beam "0.6.0-alpha5"] ;; latest release

Usage

To upload files directly to S3 you need to send special request parameters that are based on your AWS credentials, the file name, mime type, date etc. Since we don't want to store our credentials in the client, these parameters need to be generated on the server side. For this reason this library consists of two parts:

  1. A pluggable route that will send back the required parameters for a given file-name & mime-type
  2. A client-side core.async pipeline setup that will retrieve the special parameters for a given File object, upload it to S3 and report back to you

1. Enable CORS on your S3 bucket

Please follow Amazon's official documentation.

For DigitalOcean Spaces, please follow DigitalOcean's official documentation.

2. Plug-in the route to sign uploads

(ns your.server
  (:require [s3-beam.handler :as s3b]
            [compojure.core :refer [GET defroutes]]
            [compojure.route :refer [resources]]))

(def bucket "your-bucket")
(def aws-zone "eu-west-1")
(def access-key "your-aws-access-key")
(def secret-key "your-aws-secret-key")

(defroutes routes
  (resources "/")
  (GET "/sign" {params :params} (s3b/s3-sign bucket aws-zone access-key secret-key)))

If you want to use a route different from /sign, define it in the handler, (GET "/my-cool-route" ...), and then pass it in the options map to s3-pipe in the frontend.

If you are serving your S3 bucket from DigitalOcean Spaces, with CloudFront, or another CDN/proxy, you can pass upload-url as a fifth parameter to s3-sign, so that the ClojureScript client is directed to upload through that URL. You still need to pass the bucket name, as the policy that is created and signed is based on the bucket name.
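As a sketch, passing a custom upload URL as the fifth argument to s3-sign could look like this (the CDN hostname below is an assumption for illustration, not a real endpoint):

```clojure
;; Sketch: sign uploads so the client POSTs through a CDN/proxy in
;; front of the bucket. The hostname is a placeholder.
(defroutes routes
  (resources "/")
  (GET "/sign" {params :params}
    (s3b/s3-sign bucket aws-zone access-key secret-key
                 "https://uploads.example.com")))
```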

3. Integrate the upload pipeline into your frontend

In your frontend code you can now use s3-beam.client/s3-pipe. s3-pipe's argument is a channel where completed uploads will be reported. The function returns a channel where you can put File objects or file maps that should get uploaded. It can also take an extra options map with the previously mentioned :server-url like so:

(s3/s3-pipe uploaded {:server-url "/my-cool-route"}) ; assuming s3-beam.client is NS aliased as s3

The full options map spec is:

  • :server-url the signing server URL; defaults to "/sign"
  • :response-parser a function to parse the signing response from the signing server into EDN; defaults to read-string
  • :key-fn a function used to generate the object key for the uploaded file on S3; defaults to nil, which means the passed filename is used as the object key
  • :headers-fn a function used to create the headers for the GET request to the signing server. The returned headers should be a Clojure map of header name strings to corresponding header value strings.
  • :progress-events? if set to true, progress events are pushed onto the channel during the transfer; defaults to false
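Combining a few of these options might look like the following sketch; the "uploads/" key prefix and the Authorization header are assumptions for illustration, not requirements of the library:

```clojure
(def upload-queue
  (s3/s3-pipe uploaded
              {:server-url       "/sign"
               :progress-events? true
               ;; assumed: store objects under an "uploads/" prefix
               :key-fn           (fn [file-name] (str "uploads/" file-name))
               ;; assumed: the signing route expects an auth header
               :headers-fn       (fn [] {"Authorization" "Bearer <token>"})}))
```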

If you choose to place a file map instead of a File object, your file map should follow:

  • :file A File object
  • :identifier (optional) A value used to uniquely identify this file upload. It will be included in the response channel.
  • :key (optional) The file-name parameter that is sent to the signing server. If a :key key exists in the input map it is used instead of :key-fn as the object key.
  • :metadata (optional) Metadata for the object. See Amazon's API docs for full details on which keys are supported. Keys and values can be strings or keywords. N.B. keys not on that list will not be accepted; arbitrary metadata must be prefixed with x-amz-meta-*.
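For example, enqueuing a file map could look like this sketch, where file is assumed to be a js/File (e.g. from a drop event or file input) and the key prefix and metadata values are purely illustrative:

```clojure
(require '[cljs.core.async :refer [put!]])

(put! upload-queue
      {:file       file                             ; a js/File object
       :identifier (str (random-uuid))              ; echoed back on the response channel
       :key        (str "avatars/" (.-name file))   ; assumed object-key prefix
       :metadata   {:content-disposition "attachment"
                    :x-amz-meta-owner    "user-123"}}) ; assumed custom metadata
```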

An example using it within an Om component:

(ns your.client
  (:require [s3-beam.client :as s3]
            ...))

(defcomponent upload-form [app-state owner]
  (init-state [_]
    (let [uploaded (chan 20)]
      {:dropped-queue (chan 20)
       :upload-queue  (s3/s3-pipe uploaded)
       :uploaded      uploaded
       :uploads       []}))
  (did-mount [_]
    (listen-file-drop js/document (om/get-state owner :dropped-queue))
    (go (while true
          (let [{:keys [dropped-queue upload-queue uploaded uploads]} (om/get-state owner)]
            (let [[v ch] (alts! [dropped-queue uploaded])]
              (cond
                (= ch dropped-queue) (put! upload-queue v)
                (= ch uploaded) (om/set-state! owner :uploads (conj uploads v))))))))
  (render-state [this state]
    ;; ...
    ))

Return values

The spec for the returned map (in the example above the returned map is v):

  • :type :success
  • :file The File object for the uploaded file
  • :response The upload response from S3 as a map with:
    • :location The S3 URL of the uploaded file
    • :bucket The S3 bucket where the file is located
    • :key The S3 key for the file
    • :etag The etag for the file
  • :xhr The XhrIo object used to POST to S3
  • :identifier A value used to uniquely identify the uploaded file

Or, if an error occurs during upload processing, an error-map will be placed on the response channel:

  • :type :error
  • :identifier A value used to uniquely identify this file upload. It will be included in the response channel.
  • :error-code The error code from the XHR
  • :error-message The debug message for the error code
  • :http-error-code The HTTP error code

If :progress-events? is set to true, progress events from XhrIo are also forwarded:

  • :type :progress
  • :file The File object for the uploaded file
  • :bytes-sent Bytes uploaded so far
  • :bytes-total Total file size in bytes
  • :xhr The XhrIo object used to POST to S3
  • :identifier A value used to uniquely identify the uploaded file
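A consumer of the response channel can dispatch on :type to handle all three kinds of maps. A minimal core.async sketch, assuming the uploaded channel from the earlier examples:

```clojure
(require '[cljs.core.async :refer [<!] :refer-macros [go-loop]])

;; Sketch: log progress, success, and error maps from the `uploaded` channel.
(go-loop []
  (when-let [{:keys [type] :as m} (<! uploaded)]
    (case type
      :progress (println "sent" (:bytes-sent m) "of" (:bytes-total m) "bytes")
      :success  (println "uploaded to" (get-in m [:response :location]))
      :error    (println "upload failed:" (:error-message m)))
    (recur)))
```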

Changes

0.6.0-alpha5

  • Fix compilation issues with shadow-cljs (#47)
  • Upgrade dependencies (#48)

0.6.0-alpha4

  • Add support for DigitalOcean Spaces (#44)

0.6.0-alpha3

  • Add support for progress events (#40)

0.6.0-alpha1

  • Add support for assigning metadata to files when uploading them. See the file-map spec above for more details. (#37)
  • Tweak keys and parameters for communication between the client and server parts of the library. This is backwards and forwards compatible between clients and servers running 0.5.2 and 0.6.0-alpha1.

0.5.2

  • Allow the user to upload to S3 through a custom URL, passed as an extra parameter to sign-upload
  • Support bucket names with a '.' in them
  • Add asserts that arguments are provided

0.5.1

  • Allow the upload-queue to be passed an input map instead of a file. This input map follows the spec:

    • :file A File object
    • :identifier (optional) A value used to uniquely identify this file upload. It will be included in the response channel.
    • :key (optional) The file-name parameter that is sent to the signing server. If a :key key exists in the input map it is used instead of :key-fn as the object key.
  • Introduce error handling. When an error is thrown while uploading a file to S3, an error map is put onto the channel. The error map follows the spec:

    • :identifier A value used to uniquely identify this file upload. It will be included in the response channel.
    • :error-code The error code from the XHR
    • :error-message The debug message for the error code
    • :http-error-code The HTTP error code
  • New options are available in the options map:

    • :response-parser a function to parse the signing response from the signing server into EDN; defaults to read-string
    • :key-fn a function used to generate the object key for the uploaded file on S3; defaults to nil, which means the passed filename is used as the object key
    • :headers-fn a function used to create the headers for the GET request to the signing server
  • Places a map onto the upload channel with:

    • :file The File object for the uploaded file
    • :response The upload response from S3 as a map with:
      • :location The S3 URL of the uploaded file
      • :bucket The S3 bucket where the file is located
      • :key The S3 key for the file
      • :etag The etag for the file
    • :xhr The XhrIo object used to POST to S3
    • :identifier A value used to uniquely identify the uploaded file

0.4.0

  • Support custom ACLs. The sign-upload function that can be used to implement custom signing routes now supports an additional :acl key to upload assets with an ACL other than public-read.

    (sign-upload {:file-name "xyz.html" :mime-type "text/html"}
                 {:bucket bucket
                  :aws-zone aws-zone
                  :aws-access-key access-key
                  :aws-secret-key secret-key
                  :acl "authenticated-read"})
  • Changes the arity of the s3-beam.handler/policy function.

0.3.1

  • Correctly look up endpoints given a zone parameter (#10)

0.3.0

  • Allow customization of server-side endpoint (1cb9b27)

    (s3/s3-pipe uploaded {:server-url "/my-cool-route"})
    

0.2.0

  • Allow passing of the aws-zone parameter to the s3-sign handler function (b880736)

Contributing

Pull requests and issues are welcome. There are a few things I'd like to improve:

  • Testing: currently there are no tests
  • Error handling: what happens when the request fails?

Maintainers

Martin Klepsch and Daniel Compton

License

Copyright © 2014 Martin Klepsch

Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version.