| field | value | date |
|---|---|---|
| author | Konstantin Lebedev <9497591+kmlebedev@users.noreply.github.com> | 2025-11-06 11:05:54 +0500 |
| committer | GitHub <noreply@github.com> | 2025-11-05 22:05:54 -0800 |
| commit | 084b377f8786e3a4d98e0763c3e83be104a9b65e (patch) | |
| tree | 6c7fed59d4a631d8c1f10cb2c81ad11014d902ea /weed/s3api/s3api_object_handlers_put.go | |
| parent | cc444b186849cc4e476d539dd2643058a8160534 (diff) | |
| download | seaweedfs-084b377f8786e3a4d98e0763c3e83be104a9b65e.tar.xz seaweedfs-084b377f8786e3a4d98e0763c3e83be104a9b65e.zip | |
do delete expired entries on s3 list request (#7426)
* do delete expired entries on s3 list request
https://github.com/seaweedfs/seaweedfs/issues/6837
* disable deleting expired s3 entries in filer
* pass opt allowDeleteObjectsByTTL to all servers
* delete on get and head
* add lifecycle expiration s3 tests
* fix opt allowDeleteObjectsByTTL for server
* fix test lifecycle expiration
* fix IsExpired
* fix locationPrefix for updateEntriesTTL
* fix s3tests
* resolve coderabbitai comments
* GetS3ExpireTime on filer
* go mod
* clear TtlSeconds for volume
* move s3 delete expired entry to filer
* filer delete meta and data
* remove unused func removeExpiredObject
* test s3 put
* test s3 put multipart
* allowDeleteObjectsByTTL by default
* fix pipeline tests
* remove duplicate SeaweedFSExpiresS3
* revert expiration tests
* fix updateTTL
* rm log
* resolve comment
* fix delete version object
* fix S3Versioning
* fix delete on FindEntry
* fix delete chunks
* fix sqlite not support concurrent writes/reads
* move deletion out of listing transaction; delete entries and empty folders
* Revert "fix sqlite not support concurrent writes/reads"
This reverts commit 5d5da14e0ed91c613fe5c0ed058f58bb04fba6f0.
* clearer handling of recursive empty directory deletion
* handle listing errors
* struct copying
* reuse code to delete empty folders
* use an iterative approach with a queue to avoid recursive WithFilerClient calls (see the sketch after this list)
* the way to stop a gRPC stream from the client-side callback is to return a specific error, e.g., io.EOF (see the sketch after this list)
* still issue UpdateEntry when the flag must be added
* errors join
* join path
* cleaner
* add context, sort directories by depth (deepest first) to avoid redundant checks
* batched operation, refactoring
* prevent deleting bucket
* constant
* reuse code
* more logging
* refactoring
* s3 TTL time
* Safety check
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
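
The queue-based cleanup item above ("use an iterative approach with a queue...") concerns removing directories that become empty once expired entries are deleted. Below is a minimal sketch of that idea, assuming a hypothetical `FilerClient` interface (`IsEmptyDir`, `DeleteEntry`) in place of the real filer RPCs; it also reflects the deepest-first ordering and the safety check against deleting the bucket itself mentioned in the list.

```go
// Sketch only: FilerClient and its methods are placeholders, not the actual
// SeaweedFS filer API.
package cleanup

import (
	"path"
	"sort"
	"strings"
)

type FilerClient interface {
	IsEmptyDir(dir string) (bool, error)
	DeleteEntry(dir string) error
}

// deleteEmptyFolders processes candidate directories iteratively with a queue,
// so no recursion (and no nested client sessions) is needed. Deeper paths are
// handled first so a parent is only re-checked after its children.
func deleteEmptyFolders(client FilerClient, candidates []string, bucketRoot string) {
	queue := append([]string{}, candidates...)
	// deepest first: more path separators means deeper in the tree
	sort.Slice(queue, func(i, j int) bool {
		return strings.Count(queue[i], "/") > strings.Count(queue[j], "/")
	})
	for len(queue) > 0 {
		dir := queue[0]
		queue = queue[1:]
		if dir == bucketRoot || !strings.HasPrefix(dir, bucketRoot) {
			continue // safety check: never delete the bucket itself or escape it
		}
		if empty, err := client.IsEmptyDir(dir); err != nil || !empty {
			continue
		}
		if err := client.DeleteEntry(dir); err != nil {
			continue
		}
		// the parent may have become empty now; enqueue it instead of recursing
		queue = append(queue, path.Dir(dir))
	}
}
```

In the actual change the deletions are also moved out of the listing transaction and batched, but the shape is the same: a work queue plus a guard on the bucket root.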
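The io.EOF item refers to a standard gRPC client pattern: the clean way to stop a server-streaming call from a per-message callback is to return a sentinel error such as io.EOF, which the receive loop treats as "stop early" rather than as a failure. A generic sketch of that loop (not the SeaweedFS code itself):

```go
// Generic sketch of stopping a stream from a callback; recv stands in for a
// gRPC stream's Recv method and each for the per-message callback.
package streamutil

import (
	"errors"
	"io"
)

func drainStream[T any](recv func() (T, error), each func(T) error) error {
	for {
		msg, err := recv()
		if errors.Is(err, io.EOF) {
			return nil // server closed the stream normally
		}
		if err != nil {
			return err
		}
		if err := each(msg); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // callback asked to stop early; not treated as an error
			}
			return err
		}
	}
}
```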
Diffstat (limited to 'weed/s3api/s3api_object_handlers_put.go')
| mode | path | changes |
|---|---|---|
| -rw-r--r-- | weed/s3api/s3api_object_handlers_put.go | 3 |
1 file changed, 2 insertions, 1 deletion
```diff
diff --git a/weed/s3api/s3api_object_handlers_put.go b/weed/s3api/s3api_object_handlers_put.go
index 148df89f6..0d07c548e 100644
--- a/weed/s3api/s3api_object_handlers_put.go
+++ b/weed/s3api/s3api_object_handlers_put.go
@@ -333,7 +333,8 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
 		proxyReq.Header.Set(s3_constants.SeaweedFSSSES3Key, base64.StdEncoding.EncodeToString(sseS3Metadata))
 		glog.V(3).Infof("putToFiler: storing SSE-S3 metadata for object %s with keyID %s", uploadUrl, sseS3Key.KeyID)
 	}
-
+	// Set TTL-based S3 expiry (modification time)
+	proxyReq.Header.Set(s3_constants.SeaweedFSExpiresS3, "true")
 	// ensure that the Authorization header is overriding any previous
 	// Authorization header which might be already present in proxyReq
 	s3a.maybeAddFilerJwtAuthorization(proxyReq, true)
```
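
The added header marks the entry so its TTL is interpreted as S3 lifecycle expiration relative to the modification time. Conceptually, the expiry test compares the object's mtime plus the lifecycle duration against the current time; the following is a hypothetical sketch of such a check (ignoring S3's rounding of expiration to midnight UTC), not the filer's actual IsExpired code:

```go
// Hypothetical sketch of an mtime-based expiry check; names are placeholders.
package lifecycle

import "time"

// isExpired reports whether an object last modified at mtime has outlived a
// lifecycle rule of `days` days as of `now`.
func isExpired(mtime time.Time, days int, now time.Time) bool {
	if days <= 0 {
		return false // no expiration rule configured
	}
	return now.After(mtime.Add(time.Duration(days) * 24 * time.Hour))
}
```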
