Object Lock (#7734)
* fix: admin UI bucket delete now properly deletes collection and checks Object Lock
Fixes #7711
The admin UI's DeleteS3Bucket function was missing two critical behaviors:
1. It did not delete the collection from the master (unlike the s3.bucket.delete
shell command), leaving orphaned volume data that caused fs.verify errors.
2. It did not check for Object Lock protections before deletion, potentially
allowing deletion of buckets with locked objects.
Changes:
- Add shared Object Lock checking utilities to object_lock_utils.go:
  - EntryHasActiveLock: standalone function to check whether an entry has an active lock
  - HasObjectsWithActiveLocks: shared function to scan a bucket for locked objects
- Refactor the S3 API's entryHasActiveLock to use the shared EntryHasActiveLock function
- Update the admin UI's DeleteS3Bucket to:
  - check Object Lock using the shared HasObjectsWithActiveLocks utility
  - delete the collection before deleting filer entries (matching s3.bucket.delete), as sketched below
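A minimal sketch of the corrected deletion order; the interface and helper names below are illustrative stand-ins, not the actual SeaweedFS admin/filer APIs:

```go
import (
	"context"
	"fmt"
)

// bucketBackend abstracts the master and filer operations used by this sketch;
// it is NOT an actual SeaweedFS interface.
type bucketBackend interface {
	HasObjectsWithActiveLocks(ctx context.Context, bucketPath string) (bool, error)
	DeleteCollection(ctx context.Context, collection string) error
	DeleteBucketEntry(ctx context.Context, bucketPath string) error
}

func deleteS3Bucket(ctx context.Context, b bucketBackend, bucket string) error {
	// 1. Refuse to delete while any object carries active retention or a legal hold.
	locked, err := b.HasObjectsWithActiveLocks(ctx, "/buckets/"+bucket)
	if err != nil {
		return fmt.Errorf("checking object locks: %w", err)
	}
	if locked {
		return fmt.Errorf("bucket %q has objects protected by Object Lock", bucket)
	}
	// 2. Delete the collection on the master first, so no orphaned volume data is
	//    left behind (mirrors the s3.bucket.delete shell command).
	if err := b.DeleteCollection(ctx, bucket); err != nil {
		return fmt.Errorf("deleting collection: %w", err)
	}
	// 3. Finally remove the bucket entry from the filer.
	return b.DeleteBucketEntry(ctx, "/buckets/"+bucket)
}
```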
* refactor: S3 API uses shared Object Lock utilities
Removes 114 lines of duplicated code from s3api_bucket_handlers.go by
having hasObjectsWithActiveLocks delegate to the shared HasObjectsWithActiveLocks
function in object_lock_utils.go.
Now both the S3 API and the Admin UI use the same shared utilities:
- EntryHasActiveLock
- HasObjectsWithActiveLocks
- recursivelyCheckLocksWithClient
- checkVersionsForLocksWithClient
* feat: s3.bucket.delete shell command now checks Object Lock
Add Object Lock protection to the s3.bucket.delete shell command.
If the bucket has Object Lock enabled and contains objects with active
retention or legal hold, deletion is prevented.
Also refactors Object Lock checking utilities into a new s3_objectlock
package to avoid import cycles between shell, s3api, and admin packages.
All three components now share the same logic:
- S3 API (DeleteBucketHandler)
- Admin UI (DeleteS3Bucket)
- Shell command (s3.bucket.delete)
* refactor: unified Object Lock checking and consistent deletion parameters
1. Add CheckBucketForLockedObjects() - a unified function that combines:
   - Bucket entry lookup
   - Object Lock enabled check
   - Scan for locked objects
2. All three components now use this single function:
   - S3 API (via s3api.CheckBucketForLockedObjects)
   - Admin UI (via s3api.CheckBucketForLockedObjects)
   - Shell command (via s3_objectlock.CheckBucketForLockedObjects)
3. Aligned deletion parameters across all components (see the sketch below):
   - isDeleteData: false (collection already deleted separately)
   - isRecursive: true
   - ignoreRecursiveError: true
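The aligned parameters roughly translate to a filer DeleteEntry call like the following. This is a sketch: the field names follow the filer_pb.DeleteEntryRequest message as I understand it, and the import path assumes the current module layout, so verify both against the repository.

```go
import (
	"context"

	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)

// deleteBucketEntry removes the bucket's filer entry with the parameters listed
// above; the collection itself is deleted separately via the master.
func deleteBucketEntry(ctx context.Context, client filer_pb.SeaweedFilerClient, bucketsPath, bucketName string) error {
	_, err := client.DeleteEntry(ctx, &filer_pb.DeleteEntryRequest{
		Directory:            bucketsPath,
		Name:                 bucketName,
		IsDeleteData:         false, // data already removed with the collection
		IsRecursive:          true,
		IgnoreRecursiveError: true,
	})
	return err
}
```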
* fix: properly handle non-EOF errors in Recv() loops
The Recv() loops in recursivelyCheckLocksWithClient and
checkVersionsForLocksWithClient were breaking on any error, which
could hide real stream errors and incorrectly report 'no locks found'.
Now:
- io.EOF: break loop (normal end of stream)
- any other error: return it so caller knows the stream failed
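A generic sketch of the corrected loop shape (not the actual SeaweedFS code): io.EOF ends the stream cleanly, anything else is surfaced to the caller.

```go
import (
	"fmt"
	"io"
)

// drainStream receives from any gRPC server-streaming client until io.EOF,
// returning every other error instead of silently treating it as end-of-stream.
func drainStream[T any](recv func() (T, error), visit func(T) error) error {
	for {
		item, err := recv()
		if err == io.EOF {
			break // normal end of stream
		}
		if err != nil {
			return fmt.Errorf("stream receive: %w", err)
		}
		if err := visit(item); err != nil {
			return err
		}
	}
	return nil
}
```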
* fix: address PR review comments
1. Add path traversal protection - validate entry names before building
subdirectory paths. Skip entries with empty names, '.', '..', or
containing path separators.
2. Use an exact match for the .versions folder instead of HasSuffix() to avoid
matching unrelated directories like 'foo.versions'.
3. Replace path.Join with simple string concatenation since we now
validate entry names.
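A small sketch of the validation described above (helper name is illustrative):

```go
import "strings"

// isSafeEntryName rejects names that could change the target directory when
// concatenated onto a parent path.
func isSafeEntryName(name string) bool {
	if name == "" || name == "." || name == ".." {
		return false
	}
	return !strings.ContainsAny(name, `/\`)
}

// An exact match avoids treating unrelated directories such as "foo.versions"
// as version folders:
//   isVersionsFolder := entry.IsDirectory && entry.Name == ".versions"
```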
* refactor: extract paginateEntries helper to reduce duplication
The recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient
functions shared significant structural similarity. Extracted a generic
paginateEntries helper that:
- Handles pagination logic (lastFileName tracking, Limit)
- Handles stream receiving with proper EOF vs error handling
- Validates entry names (path traversal protection)
- Calls a processEntry callback for business logic
This centralizes pagination logic and makes the code more maintainable.
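A rough sketch of the helper, reusing the drainStream and isSafeEntryName sketches above. The filer_pb request fields and the page size are my best reading, not the exact code; check them against the current proto definition.

```go
import (
	"context"

	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)

const paginationLimit = 1024 // illustrative page size

func paginateEntries(ctx context.Context, client filer_pb.SeaweedFilerClient, dir string,
	processEntry func(*filer_pb.Entry) error) error {
	lastFileName := ""
	for {
		stream, err := client.ListEntries(ctx, &filer_pb.ListEntriesRequest{
			Directory:         dir,
			StartFromFileName: lastFileName,
			Limit:             paginationLimit,
		})
		if err != nil {
			return err
		}
		count := 0
		err = drainStream(stream.Recv, func(resp *filer_pb.ListEntriesResponse) error {
			entry := resp.Entry
			count++
			lastFileName = entry.Name
			if !isSafeEntryName(entry.Name) {
				return nil // path traversal protection: skip unsafe names
			}
			return processEntry(entry)
		})
		if err != nil {
			return err
		}
		if count < paginationLimit {
			return nil // short page: this directory is fully scanned
		}
	}
}
```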
* feat: add context propagation for timeout and cancellation support
All Object Lock checking functions now accept a context.Context parameter:
- paginateEntries(ctx, client, dir, processEntry)
- recursivelyCheckLocksWithClient(ctx, client, dir, hasLocks, currentTime)
- checkVersionsForLocksWithClient(ctx, client, versionsDir, hasLocks, currentTime)
- HasObjectsWithActiveLocks(ctx, client, bucketPath)
- CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName)
This enables:
- Timeout support for large bucket scans
- Cancellation propagation from HTTP requests
The S3 API handler now passes r.Context(), so the scan is tied to the request lifecycle (see the sketch below).
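For illustration, a handler can derive the scan context from the request and optionally cap its duration; the handler name, route variable, and the 5-minute timeout are all placeholders, not values from the code.

```go
import (
	"context"
	"net/http"
	"time"
)

// handleDeleteBucket shows only the context flow; the real handler does much more.
func handleDeleteBucket(check func(ctx context.Context, bucket string) error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Cancellation propagates from the client connection; the timeout bounds
		// scans of very large buckets (the value here is illustrative).
		ctx, cancel := context.WithTimeout(r.Context(), 5*time.Minute)
		defer cancel()
		if err := check(ctx, r.PathValue("bucket")); err != nil {
			http.Error(w, err.Error(), http.StatusConflict)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	}
}
```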
* fix: address PR review comments
1. Add a DefaultBucketsPath constant in admin_server.go instead of
hardcoding "/buckets" in multiple places.
2. Add defensive normalization in EntryHasActiveLock:
   - TrimSpace to handle whitespace around values
   - ToUpper for case-insensitive comparison of legal hold and retention mode values
   - TrimSpace on the retention date before parsing
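Roughly, the normalization looks like this; the helper names and the RFC 3339 date format are assumptions for the sketch, not the exact code.

```go
import (
	"fmt"
	"strings"
	"time"
)

func normalizeLockValue(v string) string {
	return strings.ToUpper(strings.TrimSpace(v))
}

// isLegalHoldOn compares the stored legal-hold value case-insensitively.
func isLegalHoldOn(raw string) bool {
	return normalizeLockValue(raw) == "ON"
}

// retentionStillActive treats GOVERNANCE/COMPLIANCE case-insensitively and trims
// the retain-until date before parsing (RFC 3339 assumed here).
func retentionStillActive(rawMode, rawUntil string, now time.Time) (bool, error) {
	mode := normalizeLockValue(rawMode)
	if mode != "GOVERNANCE" && mode != "COMPLIANCE" {
		return false, nil
	}
	until, err := time.Parse(time.RFC3339, strings.TrimSpace(rawUntil))
	if err != nil {
		return false, fmt.Errorf("parsing retain-until date: %w", err)
	}
	return now.Before(until), nil
}
```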
* fix: use ctx variable consistently instead of context.Background()
In both DeleteS3Bucket and command_s3_bucket_delete, use the ctx
variable defined at the start of the function for all gRPC calls
instead of creating new context.Background() instances.
* fix GetObjectLockConfigurationHandler
* cache and use bucket object lock config
* subscribe to bucket configuration changes
* increase bucket config cache TTL
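The caching idea, very roughly (the types below are illustrative, not the actual SeaweedFS structures): Object Lock configs are cached per bucket with a TTL, and the metadata subscription drops entries as soon as a bucket's configuration changes.

```go
import (
	"sync"
	"time"
)

type cachedConfig struct {
	config    string // placeholder for the parsed Object Lock configuration
	expiresAt time.Time
}

type lockConfigCache struct {
	mu   sync.Mutex
	ttl  time.Duration
	data map[string]cachedConfig
}

func newLockConfigCache(ttl time.Duration) *lockConfigCache {
	return &lockConfigCache{ttl: ttl, data: make(map[string]cachedConfig)}
}

func (c *lockConfigCache) get(bucket string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.data[bucket]
	if !ok || time.Now().After(e.expiresAt) {
		return "", false // miss or expired: caller re-reads the bucket entry from the filer
	}
	return e.config, true
}

func (c *lockConfigCache) put(bucket, config string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[bucket] = cachedConfig{config: config, expiresAt: time.Now().Add(c.ttl)}
}

// invalidate is driven by the bucket-configuration change subscription.
func (c *lockConfigCache) invalidate(bucket string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.data, bucket)
}
```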
* refactor
* Update weed/s3api/s3api_server.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* avoid duplicated work
* rename variable
* Update s3api_object_handlers_put.go
* fix routing
* admin ui and api handler are consistent now
* use fields instead of xml
* fix test
* address comments
* Update weed/s3api/s3api_object_handlers_put.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update test/s3/retention/s3_retention_test.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update weed/s3api/object_lock_utils.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* change error style
* errorf
* read entry once
* add s3 tests for object lock and retention
* use marker
* install s3 tests
* Update s3tests.yml
* Update s3tests.yml
* Update s3tests.conf
* Update s3tests.conf
* address test errors
* address test errors
With these fixes, the s3-tests should now:
✅ Return InvalidBucketState (409 Conflict) for object lock operations on invalid buckets
✅ Return MalformedXML for invalid retention configurations
✅ Include VersionId in response headers when available
✅ Return proper HTTP status codes (403 Forbidden for retention mode changes)
✅ Handle all object lock validation errors consistently
* fixes
With these comprehensive fixes, the s3-tests should now:
✅ Return InvalidBucketState (409 Conflict) for object lock operations on invalid buckets
✅ Return InvalidRetentionPeriod for invalid retention periods
✅ Return MalformedXML for malformed retention configurations
✅ Include VersionId in response headers when available
✅ Return proper HTTP status codes for all error conditions
✅ Handle all object lock validation errors consistently
The workflow should now pass significantly more object lock tests, bringing SeaweedFS's S3 object lock implementation much closer to AWS S3 compatibility standards.
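A hedged sketch of the kind of error mapping these fixes imply; the sentinel error names are placeholders, and the real code maps onto SeaweedFS's own S3 error-code constants rather than raw strings.

```go
import (
	"errors"
	"net/http"
)

var (
	errObjectLockNotEnabled   = errors.New("object lock not enabled on bucket")
	errInvalidRetentionPeriod = errors.New("invalid retention period")
	errMalformedRetention     = errors.New("malformed retention configuration")
	errRetentionModeChange    = errors.New("retention mode change requires bypass")
)

// mapObjectLockError translates validation failures into S3 error codes and
// HTTP statuses matching the behaviors listed above.
func mapObjectLockError(err error) (code string, status int) {
	switch {
	case errors.Is(err, errObjectLockNotEnabled):
		return "InvalidBucketState", http.StatusConflict // 409 for lock config ops
	case errors.Is(err, errInvalidRetentionPeriod):
		return "InvalidRetentionPeriod", http.StatusBadRequest
	case errors.Is(err, errMalformedRetention):
		return "MalformedXML", http.StatusBadRequest
	case errors.Is(err, errRetentionModeChange):
		return "AccessDenied", http.StatusForbidden // mode changes need bypass permission
	default:
		return "InternalError", http.StatusInternalServerError
	}
}
```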
* fixes
With these final fixes, the s3-tests should now:
✅ Return MalformedXML for ObjectLockEnabled: 'Disabled'
✅ Return MalformedXML when both Days and Years are specified in retention configuration
✅ Return InvalidBucketState (409 Conflict) when trying to suspend versioning on buckets with object lock enabled
✅ Handle all object lock validation errors consistently with proper error codes
* constants and fixes
✅ Return InvalidRetentionPeriod for invalid retention values (0 days, negative years)
✅ Return ObjectLockConfigurationNotFoundError when object lock configuration doesn't exist
✅ Handle all object lock validation errors consistently with proper error codes
* fixes
✅ Return MalformedXML when both Days and Years are specified in the same retention configuration
✅ Return 400 (Bad Request) with InvalidRequest when object lock operations are attempted on buckets without object lock enabled
✅ Handle all object lock validation errors consistently with proper error codes
* fixes
✅ Return 409 (Conflict) with InvalidBucketState for bucket-level object lock configuration operations on buckets without object lock enabled
✅ Allow increasing retention periods and overriding retention with same/later dates
✅ Only block decreasing retention periods without proper bypass permissions
✅ Handle all object lock validation errors consistently with proper error codes
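The override rule above boils down to a date comparison plus the governance bypass flag; a small sketch with illustrative names:

```go
import "time"

// canReplaceRetention allows keeping or extending the retain-until date for
// anyone, and shortening it only when the governance bypass is asserted via the
// x-amz-bypass-governance-retention header.
func canReplaceRetention(existingUntil, proposedUntil time.Time, bypassGovernance bool) bool {
	if !proposedUntil.Before(existingUntil) {
		return true // same or later date: always allowed
	}
	return bypassGovernance
}
```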
* fixes
✅ Include VersionId in multipart upload completion responses when versioning is enabled
✅ Block retention mode changes (GOVERNANCE ↔ COMPLIANCE) without bypass permissions
✅ Handle all object lock validation errors consistently with proper error codes
✅ Pass the remaining object lock tests
* fix tests
* fixes
* pass tests
* fix tests
* fixes
* add error mapping
* Update s3tests.conf
* fix test_object_lock_put_obj_lock_invalid_days
* fixes
* fix many issues
* fix test_object_lock_delete_multipart_object_with_legal_hold_on
* fix tests
* refactor
* fix test_object_lock_delete_object_with_retention_and_marker
* fix tests
* fix tests
* fix tests
* fix test itself
* fix tests
* fix test
* Update weed/s3api/s3api_object_retention.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* reduce logs
* address comments
* refactor
* rename
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>