| author | Konstantin Lebedev <9497591+kmlebedev@users.noreply.github.com> | 2025-11-06 11:05:54 +0500 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-11-05 22:05:54 -0800 |
| commit | 084b377f8786e3a4d98e0763c3e83be104a9b65e (patch) | |
| tree | 6c7fed59d4a631d8c1f10cb2c81ad11014d902ea /weed/filer/entry.go | |
| parent | cc444b186849cc4e476d539dd2643058a8160534 (diff) | |
| download | seaweedfs-084b377f8786e3a4d98e0763c3e83be104a9b65e.tar.xz seaweedfs-084b377f8786e3a4d98e0763c3e83be104a9b65e.zip | |
do delete expired entries on s3 list request (#7426)
* do delete expired entries on s3 list request
https://github.com/seaweedfs/seaweedfs/issues/6837
* disable deleting of expired s3 entries in filer
* pass opt allowDeleteObjectsByTTL to all servers
* delete on get and head
* add lifecycle expiration s3 tests
* fix opt allowDeleteObjectsByTTL for server
* fix test lifecycle expiration
* fix IsExpired
* fix locationPrefix for updateEntriesTTL
* fix s3tests
* resolve coderabbitai comments
* GetS3ExpireTime on filer
* go mod
* clear TtlSeconds for volume
* move s3 delete expired entry to filer
* filer delete meta and data
* delete unused func removeExpiredObject
* test s3 put
* test s3 put multipart
* allowDeleteObjectsByTTL by default
* fix pipeline tests
* rm duplicate SeaweedFSExpiresS3
* revert expiration tests
* fix updateTTL
* rm log
* resolve comment
* fix delete version object
* fix S3Versioning
* fix delete on FindEntry
* fix delete chunks
* fix sqlite not support concurrent writes/reads
* move deletion out of listing transaction; delete entries and empty folders
* Revert "fix sqlite not support concurrent writes/reads"
This reverts commit 5d5da14e0ed91c613fe5c0ed058f58bb04fba6f0.
* clearer handling on recursive empty directory deletion
* handle listing errors
* struct copying
* reuse code to delete empty folders
* use iterative approach with a queue to avoid recursive WithFilerClient calls
* stop a gRPC stream from the client-side callback by returning a specific error, e.g., io.EOF
* still issue UpdateEntry when the flag must be added
* errors join
* join path
* cleaner
* add context, sort directories by depth (deepest first) to avoid redundant checks
* batched operation, refactoring
* prevent deleting bucket
* constant
* reuse code
* more logging
* refactoring
* s3 TTL time
* Safety check
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
Diffstat (limited to 'weed/filer/entry.go')
| -rw-r--r-- | weed/filer/entry.go | 24 |
1 file changed, 24 insertions(+), 0 deletions(-)
```diff
diff --git a/weed/filer/entry.go b/weed/filer/entry.go
index 5bd1a3c56..4757d5c9e 100644
--- a/weed/filer/entry.go
+++ b/weed/filer/entry.go
@@ -1,6 +1,7 @@
 package filer
 
 import (
+	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
 	"os"
 	"time"
 
@@ -143,3 +144,26 @@ func maxUint64(x, y uint64) uint64 {
 	}
 	return y
 }
+
+func (entry *Entry) IsExpireS3Enabled() (exist bool) {
+	if entry.Extended != nil {
+		_, exist = entry.Extended[s3_constants.SeaweedFSExpiresS3]
+	}
+	return exist
+}
+
+func (entry *Entry) IsS3Versioning() (exist bool) {
+	if entry.Extended != nil {
+		_, exist = entry.Extended[s3_constants.ExtVersionIdKey]
+	}
+	return exist
+}
+
+func (entry *Entry) GetS3ExpireTime() (expireTime time.Time) {
+	if entry.Mtime.IsZero() {
+		expireTime = entry.Crtime
+	} else {
+		expireTime = entry.Mtime
+	}
+	return expireTime.Add(time.Duration(entry.TtlSec) * time.Second)
+}
```
