AWS S3 buckets should be kept private by default. AWS explicitly goes out of its way to make it difficult to create a non-private bucket these days.

In most web applications that use S3-hosted assets, you will need some way for clients to access those files. Instead of marking individual files as public, it’s better to create presigned URLs for them, with explicit, limited expiry times. This lets non-authenticated clients access specific bucket objects for a limited time, with just a URL.

The key to this behaviour is the createPresignedRequest method in the AWS SDK.

On the (authenticated) server side, creating a signed URL for an S3 object looks like this in PHP:

$client = \AWS::createClient('s3');

$cmd = $client->getCommand('GetObject', [
    'Bucket' => $bucket,
    'Key' => $key
]);

$request = $client->createPresignedRequest($cmd, '+20 minutes');

// Get the actual presigned URL
$presignedUrl = (string) $request->getUri();

$presignedUrl now contains a signed URL for that path, valid for 20 minutes.

It’ll look something like: https://{{ bucket }}/{{ key }}?X-Amz-Content-Sha256=..&X-Amz-Algorithm=..&X-Amz-Credential=..&X-Amz-Date=..&X-Amz-SignedHeaders=host&X-Amz-Expires=..&X-Amz-Signature=..
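Those query parameters carry the signing metadata, so you can pull the expiry details back out of a generated URL, for example to log them or sanity-check them before handing the URL to a client. A minimal sketch, using a made-up bucket hostname, key, and parameter values purely for illustration:

```php
<?php

// A hypothetical presigned URL (host, key, and values are invented).
$presignedUrl = 'https://example-bucket.s3.amazonaws.com/reports/2024.pdf'
    . '?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20240101T120000Z'
    . '&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=abc123';

// Parse the query string into an array of the X-Amz-* parameters.
parse_str(parse_url($presignedUrl, PHP_URL_QUERY), $params);

// X-Amz-Expires is the lifetime in seconds; X-Amz-Date is when it was signed.
$expiresInSeconds = (int) $params['X-Amz-Expires']; // 1200 == 20 minutes

$signedAt = \DateTimeImmutable::createFromFormat(
    'Ymd\THis\Z',
    $params['X-Amz-Date'],
    new \DateTimeZone('UTC')
);

// The moment the URL stops working.
$validUntil = $signedAt->modify("+{$expiresInSeconds} seconds");
```

Note this only inspects the URL; the expiry is ultimately enforced by S3 when it verifies the signature, not by anything on your side.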

Laravel Specifically

If you’re using Laravel (or Flysystem more generally), there’s a helper method, temporaryUrl, on the Storage facade that does the same:

$url = Storage::disk($disk)
    ->temporaryUrl(
        $key,
        Carbon::now()->addMinutes(20)
    );

Bonus Note: Check your path is valid!

Generating the presigned URL doesn’t validate that the file actually exists in the bucket at the path specified; it’s just signing a request to access that path. If the file isn’t there, the request will 404.
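If you’d rather never hand out a dead link, you can check the object exists before signing. A minimal sketch, assuming $client is an AWS SDK S3Client (or anything with the same three methods); the signIfExists helper name is mine, not the SDK’s:

```php
<?php

// Sketch: refuse to sign for objects that aren't there.
// $client is assumed to be an Aws\S3\S3Client or equivalent;
// signIfExists is a hypothetical helper name.
function signIfExists($client, string $bucket, string $key, string $expiry = '+20 minutes'): ?string
{
    // doesObjectExist issues a HEAD request against the bucket.
    if (! $client->doesObjectExist($bucket, $key)) {
        return null; // let the caller 404 early rather than hand out a dead URL
    }

    $cmd = $client->getCommand('GetObject', [
        'Bucket' => $bucket,
        'Key' => $key,
    ]);

    return (string) $client->createPresignedRequest($cmd, $expiry)->getUri();
}
```

It’s a trade-off: the check costs an extra HEAD round trip on every signing call, so skip it where a 404 on the presigned URL itself is acceptable.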