Summary
Setting up AWS S3 Sources or Destinations in Cribl Stream can surface a number of common issues. This article addresses the most frequent problems and their potential solutions.
There are two main options for authenticating when setting up S3 Sources and Destinations. The recommended option is 'Assume Role', in which Cribl Workers assume an AWS IAM role with the appropriate permissions and policies attached; for this option, Workers must run on AWS or otherwise have a way to assume a role. Alternatively, you can authenticate with an access key and secret key combination (also subject to restrictions and policies).
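As an illustration of the AssumeRole option, the target role's trust policy must allow the Workers' identity to assume it. A minimal sketch is shown below; the account ID and role name are placeholders, not values from your environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/cribl-worker-role" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The permissions the Workers actually get then come from the policies attached to the assumed role, not from this trust policy.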
For more discussion and troubleshooting details, check out our blog. Below are common errors that can appear, along with possible causes and what to check in order to resolve them.
Common Errors
Error: "S3Error: S3 bucket <bucketname> error: Forbidden message: null"
Possible Causes:
- Incorrect or invalid access key / secret key
- Incorrect or invalid role access / policy
- Using a prefix on the permissions policy
Potential Resolutions:
- Confirm access key is correct and active
- Check trust and resource policies
- Check if a permissions boundary is in use
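A frequent cause of "Forbidden" is a read policy that is missing one of the two required resource forms. A minimal sketch of a read policy for an S3 Source follows (the bucket name is a placeholder); note that adding a prefix `Condition` that doesn't match your data can also produce this error:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-source-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-source-bucket/*"
    }
  ]
}
```

Listing actions apply to the bucket ARN itself, while object actions such as `s3:GetObject` apply to the bucket ARN with `/*` appended.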
Error: "S3Error: S3 bucket <bucketname> error: NotFound message: null"
Possible Causes:
- Bucket does not exist
- Bucket not in the specified region
- Incorrect name
Potential Resolutions:
- Confirm bucket exists and is spelled correctly
- Confirm bucket is in the correct region and account
Error: "The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access" or
Error: "User <arn> is not authorized to perform: kms:Decrypt on resource <arn> ..."
Possible Causes:
- The bucket or its data is encrypted with KMS, but policies do not allow decryption
- NOTE: very common with CloudTrail KMS-encrypted buckets
Potential Resolutions:
- Check that KMS is referenced in the service config (e.g., CloudTrail, S3, etc.)
- Check that the resource policy has the appropriate KMS permissions (e.g., "kms:Decrypt")
- Allow access to the KMS key from the key policy itself (navigate to KMS -> Key -> Permissions)
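A sketch of a key policy statement that would grant decrypt access to the role the Workers assume is shown below; the `Sid`, account ID, and role name are placeholders:

```json
{
  "Sid": "AllowCriblDecrypt",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111111111111:role/cribl-worker-role" },
  "Action": ["kms:Decrypt", "kms:DescribeKey"],
  "Resource": "*"
}
```

In a key policy, `"Resource": "*"` refers to the key the policy is attached to. Remember that KMS access must be allowed both by the caller's IAM policy and by the key policy.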
Error: "Access Denied / failed to close file" (S3 Destinations)
Possible Causes:
- Improper resource policy
- Using a permissions boundary
- Invalid permissions on Cribl Worker staging directory
Potential Resolutions:
- Confirm permissions on the resource: for PutObject to work, the policy needs access to both the bucket ARN and the bucket ARN with /* appended
- Check if permissions boundary is limiting write access to S3
- Check that the Cribl Workers' local $CRIBL_HOME/outputs/staging directory and subfolders allow the user running Cribl to write to this path
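A minimal write policy for an S3 Destination might look like the following sketch (bucket name is a placeholder), covering both resource forms noted above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-dest-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-dest-bucket/*"
    }
  ]
}
```

Even with this policy in place, a permissions boundary or an explicit Deny in the bucket's own resource policy can still block the write.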
Error: "Missing Credentials in config"
Possible Causes:
- Access key entered in Stream config but not in use
- Incorrect or incomplete resource policies
Potential Resolutions:
- Remove any Access Key / Secret Key info from the Stream config if using AssumeRole instead
- Verify IAM role and resource policies
- Check the External ID is correct if in use
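If an External ID is configured in Stream, the role's trust policy must require the same value. A sketch of the relevant trust policy statement follows; the account ID and External ID value are placeholders:

```json
{
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
  "Action": "sts:AssumeRole",
  "Condition": {
    "StringEquals": { "sts:ExternalId": "my-external-id" }
  }
}
```

A mismatch between the External ID in the Stream config and the one in the trust policy will cause the AssumeRole call to fail.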
Error: "Failed to close file, Access Denied"
Possible Causes:
- More restrictive policy in use somewhere else in AWS
- Permissions boundary in place
- Incorrect permissions on the Cribl Worker staging directory
Potential Resolutions:
- Verify IAM role, permissions boundary, and resource policies
- Check resource policy on the bucket itself (common issue with CloudTrail 'created' buckets)
- Check file permissions on the Worker staging directory; allow the user running Stream write permissions to it
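The staging-directory check above can be sketched as follows. The temp directory stands in for the real path, which on a Worker would be $CRIBL_HOME/outputs/staging; the point is that the user running Stream must be able to create and write files there:

```shell
# Sketch: the user running Stream needs write access to the staging path
# and its subfolders. Illustrated with a temp dir standing in for
# $CRIBL_HOME/outputs/staging.
STAGING="$(mktemp -d)/outputs/staging"
mkdir -p "$STAGING"            # subfolders must be creatable too
chmod -R u+rwX "$STAGING"      # grant the owning user read/write/traverse
[ -w "$STAGING" ] && echo "staging writable"
```

On a real Worker, run the equivalent check as the user that the Cribl process runs under, not as root.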
Error: "Incorrect header check"
Possible Causes:
- File compression issues, including a non-supported compression type
Potential Resolutions:
- Test without compression to see if the issue resolves
- Check for proper filename extension or if using an unsupported compression type
- Stream uses content-encoding headers to validate that data is actually compressed. If compression issues persist, validate that the headers are not corrupted. As a first check, test whether you can decompress the file locally on another machine.
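The local decompression check can be sketched as below. Here a sample gzip file is created for illustration; in practice you would download the actual object from the bucket and run the same integrity test on it:

```shell
# Sketch: before suspecting Stream, confirm the object really is valid gzip.
TMP="$(mktemp -d)"
printf 'sample event\n' | gzip > "$TMP/sample.gz"
gunzip -t "$TMP/sample.gz" && echo "gzip OK"   # -t tests integrity without extracting
```

If `gunzip -t` fails on the downloaded object, the file was corrupted or was never actually gzip, regardless of its extension.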
Error: "Connection timed out after 500000ms" or "503: Slow Down"
Possible Causes:
- Connectivity issues
- Running into S3 API limits
- Stream Destination having problems
Potential Resolutions:
- Check lower-level connectivity, such as proxies, firewall rules, etc.
- S3 limits API requests to 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. Consider adjusting your partitioning to reduce cardinality. See the S3 best practices video for more information.
- Confirm Stream Destinations are up and not experiencing problems receiving data