Age | Commit message | Author | Files | Lines
7 days | test: add integration test for versioned object listing path fix [origin/adjust-fsck-cutoff-default] | chrislu | 1 | -0/+182
Add integration test that validates the fix for GitHub discussion #7573. The test verifies that: - Entry names use path.Base() to get base filename only - Path doubling bug is prevented when listing versioned objects - Logical entries are created correctly with proper attributes - .versions folder paths are handled correctly This test documents the Velero/Kopia compatibility fix and prevents regression of the path doubling bug.
8 days | Update command_volume_fsck.go | chrislu | 1 | -1/+0
8 days | volume.fsck: add help text explaining cutoffTimeAgo parameter | chrislu | 1 | -0/+7
8 days | Update command_volume_fsck.go | chrislu | 1 | -1/+1
8 days | volume.fsck: increase default cutoffTimeAgo from 5 minutes to 5 hours | chrislu | 1 | -1/+1
This change makes the fsck check more conservative by only considering chunks older than 5 hours as potential orphans. A 5-minute window was too aggressive and could incorrectly flag recently written chunks, especially on busy systems or during backup operations. Addresses #7649
8 days | Implement a `weed shell` command to return a status overview of the cluster (#7704) | Lisandro Pin | 1 | -0/+214
Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly, via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. For example:
```
> cluster.status
cluster:
  id: topo
  status: LOCKED
  nodes: 10
  topology: 1 DC(s)s, 1 disk(s) on 1 rack(s)
  volumes:
    total: 3 volumes on 1 collections
    max size: 31457280000 bytes
    regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%)
    EC: 1 EC volumes on 14 shards (14.00 shards/volume)
  storage:
    total: 186024424 bytes
    regular volumes: 186024424 bytes
    EC volumes: 0 bytes
    raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files
```
8 days | shell: add -owner flag to s3.bucket.create command (#7728) | Chris Lu | 9 | -177/+723
* shell: add -owner flag to s3.bucket.create command This fixes an issue where buckets created via weed shell cannot be accessed by non-admin S3 users because the bucket has no owner set. When using S3 IAM authentication, non-admin users can only access buckets they own. Buckets created via lazy S3 creation automatically have their owner set from the request context, but buckets created via weed shell had no owner, making them inaccessible to non-admin users. The new -owner flag allows setting the bucket owner identity (s3-identity-id) at creation time: s3.bucket.create -name my-bucket -owner my-identity-name Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599 * shell: add s3.bucket.owner command to view/change bucket ownership This command allows viewing and changing the owner of an S3 bucket, making it easier to manage bucket access for IAM users. Usage: # View the current owner of a bucket s3.bucket.owner -name my-bucket # Set or change the owner of a bucket s3.bucket.owner -name my-bucket -set -owner new-identity # Remove the owner (make bucket admin-only) s3.bucket.owner -name my-bucket -set -owner "" * shell: show bucket owner in s3.bucket.list output Display the bucket owner (s3-identity-id) when listing buckets, making it easier to see which identity owns each bucket. Example output: my-bucket size:1024 chunk:5 owner:my-identity * admin: add bucket owner support to admin UI - Add Owner field to S3Bucket struct for displaying bucket ownership - Add Owner field to CreateBucketRequest for setting owner at creation - Add UpdateBucketOwner API endpoint (PUT /api/s3/buckets/:bucket/owner) - Add SetBucketOwner function for updating bucket ownership - Update GetS3Buckets to populate owner from s3-identity-id extended attribute - Update CreateS3BucketWithObjectLock to set owner when creating bucket This allows the admin UI to display bucket owners and supports creating/ editing bucket ownership, which is essential for S3 IAM authentication where non-admin users can only access buckets they own. * admin: show bucket owner in buckets list and create form - Add Owner column to buckets table to display bucket ownership - Add Owner field to create bucket form for setting owner at creation - Show owner in bucket details modal - Update JavaScript to include owner when creating buckets This makes bucket ownership visible and configurable from the admin UI, which is essential for S3 IAM authentication where non-admin users can only access buckets they own. * admin: add bucket owner management with user dropdown - Add 'Manage Owner' button to bucket actions - Add modal with dropdown to select owner from existing users - Fetch users from /api/users endpoint to populate dropdown - Update create bucket form to use dropdown for owner selection - Allow setting owner to empty (no owner = admin-only access) This provides a user-friendly way to manage bucket ownership by selecting from existing S3 identities rather than manually typing identity names. * fix: use username instead of name for user dropdown The /api/users endpoint returns 'username' field, not 'name'. Fixed both the manage owner modal and create bucket form. * Update s3_buckets_templ.go * fix: address code review feedback for s3.bucket.create - Check if entry.Extended is nil before making a new map to prevent overwriting any previously set extended attributes - Use fmt.Fprintln(writer, ...) 
instead of println() for consistent output handling across the shell command framework * fix: improve help text and validate owner input - Add note that -owner value should match identity name in s3.json - Trim whitespace from owner and treat whitespace-only as empty * fix: address code review feedback for list and owner commands - s3.bucket.list: Use %q to escape owner value and prevent malformed tabular output from special characters (tabs/newlines/control chars) - s3.bucket.owner: Use neutral error message for lookup failures since they can occur for reasons other than missing bucket (e.g., permission) * fix: improve s3.bucket.owner CLI UX - Remove confusing -set flag that was required but not shown in examples - Add explicit -delete flag to remove owner (safer than empty string) - Presence of -owner now implies set operation (no extra flag needed) - Validate that -owner and -delete cannot be used together - Trim whitespace from owner value - Update help text with correct examples and add note about identity name - Clearer success messages for each operation * fix: address code review feedback for admin UI - GetBucketDetails: Extract and return owner from extended attributes - CSV export: Fix column indices after adding Owner column, add Owner to header - XSS prevention: Add escapeHtml() function to sanitize user data in innerHTML (bucket.name, bucket.owner, bucket.object_lock_mode, obj.key, obj.storage_class) * fix: address additional code review feedback - types.go: Add omitempty to Owner JSON tag, update comment - bucket_management.go: Trim and validate owner (max 256 chars) in CreateBucket - bucket_management.go: Use neutral error message in SetBucketOwner lookup * fix: improve owner field handling and error recovery bucket_management.go: - Use *string pointer for Owner to detect if field was explicitly provided - Return HTTP 400 if owner field is missing (use empty string to clear) - Trim and validate owner (max 256 chars) in UpdateBucketOwner s3_buckets.templ: - Re-enable owner select dropdown on fetch error - Reset dropdown to default 'No owner' option on error - Allow users to retry or continue without selecting an owner * fix: move modal instance variables to global scope Move deleteModalInstance, quotaModalInstance, ownerModalInstance, detailsModalInstance, and cachedUsers to global scope so they are accessible from both DOMContentLoaded handlers and global functions like deleteBucket(). This fixes the undefined variable issue. 
* refactor: improve modal handling and avoid global window properties - Initialize modal instances once on DOMContentLoaded and reuse with show() - Replace window.currentBucket* global properties with data attributes on forms - Remove modal dispose/recreate pattern and unnecessary cleanup code - Scope state to relevant DOM elements instead of global namespace * Update s3_buckets_templ.go * fix: define MaxOwnerNameLength constant and implement RFC 4180 CSV escaping bucket_management.go: - Add MaxOwnerNameLength constant (256) with documentation - Replace magic number 256 with constant in both validation checks s3_buckets.templ: - Add escapeCsvField() helper for RFC 4180 compliant CSV escaping - Properly handle commas, double quotes, and newlines in field values - Escape internal quotes by doubling them (")→("") * Update s3_buckets_templ.go * refactor: use direct gRPC client methods for consistency - command_s3_bucket_create.go: Use client.CreateEntry instead of filer_pb.CreateEntry - command_s3_bucket_owner.go: Use client.LookupDirectoryEntry instead of filer_pb.LookupEntry - command_s3_bucket_owner.go: Use client.UpdateEntry instead of filer_pb.UpdateEntry This aligns with the pattern used in weed/admin/dash/bucket_management.go
8 days | s3: allow -s3.config and -s3.iam.config to work together (#7727) | Chris Lu | 1 | -8/+8
When both -s3.config and -s3.iam.config are configured, traditional credentials from -s3.config were failing with Access Denied because the authorization code always used IAM authorization when IAM integration was configured. The fix checks if the identity has legacy Actions (from -s3.config). If so, use the legacy canDo() authorization. Only use IAM authorization for JWT/STS identities that don't have legacy Actions. This allows both configuration options to coexist: - Traditional credentials use legacy authorization - JWT/STS credentials use IAM authorization Fixes #7720
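A minimal sketch of the dual authorization branch described above, assuming a simplified Identity type; the real SeaweedFS types, field names, and canDo signature differ.
```
package main

import "fmt"

// Identity is a simplified stand-in for an authenticated S3 identity.
// Actions is non-empty only for identities loaded from -s3.config.
type Identity struct {
	Name    string
	Actions []string // legacy actions, e.g. "Write:my-bucket"
}

// canDoLegacy mimics the legacy action check used for -s3.config identities.
func canDoLegacy(id *Identity, action, bucket string) bool {
	want := action + ":" + bucket
	for _, a := range id.Actions {
		if a == want || a == action {
			return true
		}
	}
	return false
}

// authorizeViaIAM stands in for the IAM/STS policy evaluation path.
func authorizeViaIAM(id *Identity, action, bucket string) bool {
	// ... evaluate IAM policies for JWT/STS identities ...
	return false
}

// authorize routes identities with legacy Actions through the legacy check,
// and only uses IAM evaluation for identities without legacy Actions.
func authorize(id *Identity, action, bucket string) bool {
	if len(id.Actions) > 0 {
		return canDoLegacy(id, action, bucket)
	}
	return authorizeViaIAM(id, action, bucket)
}

func main() {
	legacy := &Identity{Name: "ops", Actions: []string{"Write:my-bucket"}}
	fmt.Println(authorize(legacy, "Write", "my-bucket")) // true via the legacy path
}
```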
8 days | s3: enable auth when IAM integration is configured (#7726) | Chris Lu | 2 | -0/+158
When only IAM integration is configured (via -s3.iam.config) without traditional S3 identities, the isAuthEnabled flag was not being set, causing the Auth middleware to bypass all authentication checks. This fix ensures that when SetIAMIntegration is called with a non-nil integration, isAuthEnabled is set to true, properly enforcing authentication for all requests. Added negative authentication tests: - TestS3AuthenticationDenied: tests rejection of unauthenticated, invalid, and expired JWT requests - TestS3IAMOnlyModeRejectsAnonymous: tests that IAM-only mode properly rejects anonymous requests Fixes #7724
8 days | Reduce memory allocations in hot paths (#7725) | Chris Lu | 10 | -31/+298
* filer: reduce allocations in MatchStorageRule Optimize MatchStorageRule to avoid allocations in common cases: - Return singleton emptyPathConf when no rules match (zero allocations) - Return existing rule directly when only one rule matches (zero allocations) - Only allocate and merge when multiple rules match (rare case) Based on heap profile analysis showing 111MB allocated from 1.64M calls to this function during 180 seconds of operation. * filer: add fast path for getActualStore when no path-specific stores Add hasPathSpecificStore flag to FilerStoreWrapper to skip the MatchPrefix() call and []byte(path) conversion when no path-specific stores are configured (the common case). Based on heap profile analysis showing 1.39M calls to this function during 180 seconds of operation, each requiring a string-to-byte slice conversion for the MatchPrefix call. * filer/foundationdb: use sync.Pool for tuple allocation in genKey Use sync.Pool to reuse tuple.Tuple slices in genKey(), reducing allocation overhead for every FoundationDB operation. Based on heap profile analysis showing 102MB allocated from 1.79M calls to genKey() during 180 seconds of operation. The Pack() call still allocates internally, but this reduces the tuple slice allocation overhead by ~50%. * filer: use sync.Pool for protobuf Entry and FuseAttributes Add pooling for filer_pb.Entry and filer_pb.FuseAttributes in EncodeAttributesAndChunks and DecodeAttributesAndChunks to reduce allocations during filer store operations. Changes: - Add pbEntryPool with pre-allocated FuseAttributes - Add EntryAttributeToExistingPb for in-place attribute conversion - Update ToExistingProtoEntry to reuse existing Attributes when available Based on heap profile showing: - EncodeAttributesAndChunks: 69.5MB cumulative - DecodeAttributesAndChunks: 46.5MB cumulative - EntryAttributeToPb: 47.5MB flat allocations * log_buffer: use sync.Pool for LogEntry in readTs Add logEntryPool to reuse filer_pb.LogEntry objects in readTs(), which is called frequently during binary search in ReadFromBuffer. This function only needs the TsNs field from the unmarshaled entry, so pooling the LogEntry avoids repeated allocations. Based on heap profile showing readTs with 188MB cumulative allocations from timestamp lookups during log buffer reads. * pb: reduce gRPC metadata allocations in interceptor Optimize requestIDUnaryInterceptor and WithGrpcClient to reduce metadata allocations on every gRPC request: - Use AppendToOutgoingContext instead of NewOutgoingContext + New() This avoids creating a new map[string]string for single key-value pairs - Check FromIncomingContext return value before using metadata Based on heap profile showing metadata operations contributing 0.45GB (10.5%) of allocations, with requestIDUnaryInterceptor being the main source at 0.44GB cumulative. Expected reduction: ~0.2GB from avoiding map allocations per request. * filer/log_buffer: address code review feedback - Use proto.Reset() instead of manual field clearing in resetLogEntry for more idiomatic and comprehensive state clearing - Add resetPbEntry() call before pool return in error path for consistency with success path in DecodeAttributesAndChunks * log_buffer: reduce PreviousBufferCount from 32 to 4 Reduce the number of retained previous buffers from 32 to 4. Each buffer is 8MB, so this reduces the maximum retained memory from 256MB to 32MB for previous buffers. Most subscribers catch up quickly, so 4 buffers (32MB) should be sufficient while significantly reducing memory footprint. 
* filer/foundationdb: use defer for tuple pool cleanup in genKey Refactor genKey to use defer for returning the pooled tuple. This ensures the pooled object is always returned even if store.seaweedfsDir.Pack panics, making the code more robust. Also simplifies the code by removing the temporary variable. * filer: early-stop MatchStorageRule prescan after 2 matches Stop the prescan callback after finding 2 matches since we only need to know if there are 0, 1, or multiple matches. This avoids unnecessarily scanning the rest of the trie when many rules exist. * fix: address critical code review issues filer_conf.go: - Remove mutable singleton emptyPathConf that could corrupt shared state - Return fresh copy for no-match case and cloned copy for single-match case - Add clonePathConf helper to create shallow copies safely grpc_client_server.go: - Remove incorrect AppendToOutgoingContext call in server interceptor (that API is for outbound client calls, not server-side handlers) - Rely on request_id.Set and SetTrailer for request ID propagation * fix: treat FilerConf_PathConf as immutable Fix callers that were incorrectly mutating the returned PathConf: - filer_server_handlers_write.go: Use local variable for MaxFileNameLength instead of mutating the shared rule - command_s3_bucket_quota_check.go: Create new PathConf explicitly when modifying config instead of mutating the returned one This allows MatchStorageRule to safely return the singleton or direct references without copying, restoring the memory optimization. Callers must NOT mutate the returned *FilerConf_PathConf. * filer: add ClonePathConf helper for creating mutable copies Add reusable ClonePathConf function that creates a mutable copy of a PathConf. This is useful when callers need to modify config before calling SetLocationConf. Update command_s3_bucket_quota_check.go to use the new helper. Also fix redundant return statement in DeleteLocationConf. * fmt * filer: fix protobuf pool reset to clear internal fields Address code review feedback: 1. resetPbEntry/resetFuseAttributes: Use struct assignment (*e = T{}) instead of field-by-field reset to clear protobuf internal fields (unknownFields, sizeCache) that would otherwise accumulate across pool reuses, causing data corruption or memory bloat. 2. EntryAttributeToExistingPb: Add nil guard for attr parameter to prevent panic if caller passes nil. * log_buffer: reset logEntry before pool return in error path For consistency with success path, reset the logEntry before putting it back in the pool in the error path. This prevents the pooled object from holding references to partially unmarshaled data. * filer: optimize MatchStorageRule and document ClonePathConf 1. Avoid double []byte(path) conversion in multi-match case by converting once and reusing pathBytes. 2. Add IMPORTANT comment to ClonePathConf documenting that it must be kept in sync with filer_pb.FilerConf_PathConf fields when the protobuf evolves. * filer/log_buffer: fix data race and use defer for pool cleanup 1. entry_codec.go EncodeAttributesAndChunks: Fix critical data race - proto.Marshal may return a slice sharing memory with the message. Copy the data before returning message to pool to prevent corruption. 2. entry_codec.go DecodeAttributesAndChunks: Use defer for cleaner pool management, ensuring message is always returned to pool. 3. log_buffer.go readTs: Use defer for pool cleanup, removing duplicated resetLogEntry/Put calls in success and error paths. 
* filer: fix ClonePathConf field order and add comprehensive test 1. Fix field order in ClonePathConf to match protobuf struct definition (WormGracePeriodSeconds before WormRetentionTimeSeconds). 2. Add TestClonePathConf that constructs a fully-populated PathConf, calls ClonePathConf, and asserts equality of all exported fields. This will catch future schema drift when new fields are added. 3. Add TestClonePathConfNil to verify nil handling. * filer: use reflection in ClonePathConf test to detect schema drift Replace hardcoded field comparisons with reflection-based comparison. This automatically catches: 1. New fields added to the protobuf but not copied in ClonePathConf 2. Missing non-zero test values for any exported field The test iterates over all exported fields using reflect and compares src vs clone values, failing if any field differs. * filer: update EntryAttributeToExistingPb comment to reflect nil handling The function safely handles nil attr by returning early, but the comment incorrectly stated 'attr must not be nil'. Update comment to accurately describe the defensive behavior. * Fix review feedback: restore request ID propagation and remove redundant resets 1. grpc_client_server.go: Restore AppendToOutgoingContext for request ID so handlers making downstream gRPC calls will automatically propagate the request ID to downstream services. 2. entry_codec.go: Remove redundant resetPbEntry calls after Get. The defer block ensures reset before Put, so next Get receives clean object. 3. log_buffer.go: Remove redundant resetLogEntry call after Get for same reason - defer already handles reset before Put.
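A generic illustration of the pooling pattern and the copy-before-return fix discussed above. The type and encoding here are placeholders (the real code pools filer_pb messages and uses proto.Marshal); with json.Marshal there is no actual aliasing, so the copy only mirrors the defensive pattern the commit describes.
```
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// entry is a placeholder for the pooled message type.
type entry struct {
	Name   string `json:"name"`
	Chunks []int  `json:"chunks"`
}

var entryPool = sync.Pool{
	New: func() any { return new(entry) },
}

// encodeEntry reuses a pooled message for marshaling. The marshaled bytes are
// copied before the message goes back to the pool, so a later pool user can
// never corrupt data the caller still holds.
func encodeEntry(name string, chunks []int) ([]byte, error) {
	e := entryPool.Get().(*entry)
	defer func() {
		*e = entry{} // reset all fields before returning to the pool
		entryPool.Put(e)
	}()

	e.Name = name
	e.Chunks = chunks

	data, err := json.Marshal(e)
	if err != nil {
		return nil, err
	}
	// Defensive copy before the pooled object is released.
	out := make([]byte, len(data))
	copy(out, data)
	return out, nil
}

func main() {
	b, _ := encodeEntry("file1", []int{1, 2, 3})
	fmt.Println(string(b))
}
```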
9 days | chore(deps): bump github.com/quic-go/quic-go from 0.54.1 to 0.57.0 (#7718) | dependabot[bot] | 2 | -11/+8
Bumps [github.com/quic-go/quic-go](https://github.com/quic-go/quic-go) from 0.54.1 to 0.57.0. - [Release notes](https://github.com/quic-go/quic-go/releases) - [Commits](https://github.com/quic-go/quic-go/compare/v0.54.1...v0.57.0) --- updated-dependencies: - dependency-name: github.com/quic-go/quic-go dependency-version: 0.57.0 dependency-type: indirect ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
9 days | filer.sync: fix checkpoint not being saved properly (#7719) | Chris Lu | 2 | -4/+50
* filer.sync: fix race condition on first checkpoint save Initialize lastWriteTime to time.Now() instead of zero time to prevent the first checkpoint save from being triggered immediately when the first event arrives. This gives async jobs time to complete and update the watermark before the checkpoint is saved. Previously, the zero time caused lastWriteTime.Add(3s).Before(now) to be true on the first event, triggering an immediate checkpoint save attempt. But since jobs are processed asynchronously, the watermark was still 0 (initial value), causing the save to be skipped due to the 'if offsetTsNs == 0 { return nil }' check. Fixes #7717 * filer.sync: save checkpoint on graceful shutdown Add graceful shutdown handling to save the final checkpoint when filer.sync is terminated. Previously, any sync progress within the last 3-second checkpoint interval would be lost on shutdown. Changes: - Add syncState struct to track current processor and offset save info - Add atomic pointers syncStateA2B and syncStateB2A for both directions - Register grace.OnInterrupt hook to save checkpoints on shutdown - Modify doSubscribeFilerMetaChanges to update sync state atomically This ensures that when filer.sync is restarted, it resumes from the correct position instead of potentially replaying old events. Fixes #7717
10 days | test: fix master client timeout causing test hangs (#7715) | Chris Lu | 1 | -141/+108
* test: fix master client timeout causing test hangs Use the main test context for KeepConnectedToMaster instead of creating a separate 60s context. The tests have 180s outer timeouts but the master client was disconnecting after 60s, causing subsequent commands to hang waiting for reconnection. * test: add -peers=none to all test masters and timeout for lock - Add -peers=none flag to all master servers for faster startup - Add tryLockWithTimeout helper to avoid tests hanging on lock acquisition - Skip tests if lock cannot be acquired within 30 seconds * test: extract connectToMasterAndSync helper to reduce duplication * test: fix captureCommandOutput pipe deadlock Close write end of pipe before calling io.ReadAll to signal EOF, otherwise ReadAll blocks forever waiting for more data. * test: fix tryLockWithTimeout to check lock command errors Propagate lock command error through channel and only treat as locked if command succeeded. Previously any completion (including errors) was treated as successful lock acquisition.
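The pipe deadlock fix amounts to closing the write end before reading. A standalone capture helper of this shape illustrates the issue; it is not the test's actual helper, and large outputs would need a concurrent reader goroutine since a pipe buffer is finite.
```
package main

import (
	"fmt"
	"io"
	"os"
)

// captureStdout runs fn with os.Stdout redirected to a pipe and returns what
// fn printed. The write end must be closed before io.ReadAll, otherwise
// ReadAll blocks forever waiting for EOF.
func captureStdout(fn func()) (string, error) {
	r, w, err := os.Pipe()
	if err != nil {
		return "", err
	}
	saved := os.Stdout
	os.Stdout = w

	fn()

	os.Stdout = saved
	w.Close() // signal EOF to the reader; without this, ReadAll hangs

	out, err := io.ReadAll(r)
	r.Close()
	return string(out), err
}

func main() {
	out, _ := captureStdout(func() { fmt.Println("hello from the command") })
	fmt.Printf("captured: %q\n", out)
}
```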
10 days | s3: fix presigned POST upload missing slash between bucket and key (#7714) | Chris Lu | 4 | -3/+414
* s3: fix presigned POST upload missing slash between bucket and key When uploading a file using presigned POST (e.g., boto3.generate_presigned_post), the file was saved with the bucket name and object key concatenated without a slash (e.g., 'my-bucketfilename' instead of 'my-bucket/filename'). The issue was that PostPolicyBucketHandler retrieved the object key from form values without ensuring it had a leading slash, unlike GetBucketAndObject() which normalizes the key. Fixes #7713 * s3: add tests for presigned POST key normalization Add comprehensive tests for PostPolicyBucketHandler to ensure: - Object keys without leading slashes are properly normalized - ${filename} substitution works correctly with normalization - Path construction correctly separates bucket and key - Form value extraction works properly These tests would have caught the bug fixed in the previous commit where keys like 'test_image.png' were concatenated with bucket without a separator, resulting in 'my-buckettest_image.png'. * s3: create normalizeObjectKey function for robust key normalization Address review feedback by creating a reusable normalizeObjectKey function that both adds a leading slash and removes duplicate slashes, aligning with how other handlers process paths (e.g., toFilerPath uses removeDuplicateSlashes). The function handles edge cases like: - Keys without leading slashes (the original bug) - Keys with duplicate slashes (e.g., 'a//b' -> '/a/b') - Keys with leading duplicate slashes (e.g., '///a' -> '/a') Updated tests to use the new function and added TestNormalizeObjectKey for comprehensive coverage of the new function. * s3: move NormalizeObjectKey to s3_constants for shared use Move the NormalizeObjectKey function to the s3_constants package so it can be reused by: - GetBucketAndObject() - now normalizes all object keys from URL paths - GetPrefix() - now normalizes prefix query parameters - PostPolicyBucketHandler - normalizes keys from form values This ensures consistent object key normalization across all S3 API handlers, handling both missing leading slashes and duplicate slashes. Benefits: - Single source of truth for key normalization - GetBucketAndObject now removes duplicate slashes (previously only added leading slash) - All handlers benefit from the improved normalization automatically
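A sketch of the key normalization the commit describes (leading slash plus duplicate-slash removal); the function name follows the commit message but the body here is an assumption, not the actual implementation.
```
package main

import (
	"fmt"
	"strings"
)

// normalizeObjectKey ensures the key starts with exactly one slash and
// collapses runs of slashes, so "<bucket>"+key always yields "bucket/key".
func normalizeObjectKey(key string) string {
	if !strings.HasPrefix(key, "/") {
		key = "/" + key
	}
	// collapse duplicate slashes, e.g. "a//b" -> "/a/b"
	for strings.Contains(key, "//") {
		key = strings.ReplaceAll(key, "//", "/")
	}
	return key
}

func main() {
	fmt.Println("my-bucket" + normalizeObjectKey("test_image.png")) // my-bucket/test_image.png
	fmt.Println(normalizeObjectKey("a//b"))                         // /a/b
	fmt.Println(normalizeObjectKey("///a"))                         // /a
}
```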
10 days | ec: add -diskType flag to EC commands for SSD support (#7607) | Chris Lu | 9 | -97/+1310
* ec: add diskType parameter to core EC functions Add diskType parameter to: - ecBalancer struct - collectEcVolumeServersByDc() - collectEcNodesForDC() - collectEcNodes() - EcBalance() This allows EC operations to target specific disk types (hdd, ssd, etc.) instead of being hardcoded to HardDriveType only. For backward compatibility, all callers currently pass types.HardDriveType as the default value. Subsequent commits will add -diskType flags to the individual EC commands. * ec: update helper functions to use configurable diskType Update the following functions to accept/use diskType parameter: - findEcVolumeShards() - addEcVolumeShards() - deleteEcVolumeShards() - moveMountedShardToEcNode() - countShardsByRack() - pickNEcShardsToMoveFrom() All ecBalancer methods now use ecb.diskType instead of hardcoded types.HardDriveType. Non-ecBalancer callers (like volumeServer.evacuate and ec.rebuild) use types.HardDriveType as the default. Update all test files to pass diskType where needed. * ec: add -diskType flag to ec.balance and ec.encode commands Add -diskType flag to specify the target disk type for EC operations: - ec.balance -diskType=ssd - ec.encode -diskType=ssd The disk type can be 'hdd', 'ssd', or empty for default (hdd). This allows placing EC shards on SSD or other disk types instead of only HDD. Example usage: ec.balance -collection=mybucket -diskType=ssd -apply ec.encode -collection=mybucket -diskType=ssd -force * test: add integration tests for EC disk type support Add integration tests to verify the -diskType flag works correctly: - TestECDiskTypeSupport: Tests EC encode and balance with SSD disk type - TestECDiskTypeMixedCluster: Tests EC operations on a mixed HDD/SSD cluster The tests verify: - Volume servers can be configured with specific disk types - ec.encode accepts -diskType flag and encodes to the correct disk type - ec.balance accepts -diskType flag and balances on the correct disk type - Mixed disk type clusters work correctly with separate collections * ec: add -sourceDiskType to ec.encode and -diskType to ec.decode ec.encode: - Add -sourceDiskType flag to filter source volumes by disk type - This enables tier migration scenarios (e.g., SSD volumes → HDD EC shards) - -diskType specifies target disk type for EC shards ec.decode: - Add -diskType flag to specify source disk type where EC shards are stored - Update collectEcShardIds() and collectEcNodeShardBits() to accept diskType Examples: # Encode SSD volumes to HDD EC shards (tier migration) ec.encode -collection=mybucket -sourceDiskType=ssd -diskType=hdd # Decode EC shards from SSD ec.decode -collection=mybucket -diskType=ssd Integration tests updated to cover new flags. * ec: fix variable shadowing and add -diskType to ec.rebuild and volumeServer.evacuate Address code review comments: 1. Fix variable shadowing in collectEcVolumeServersByDc(): - Rename loop variable 'diskType' to 'diskTypeKey' and 'diskTypeStr' to avoid shadowing the function parameter 2. Fix hardcoded HardDriveType in ecBalancer methods: - balanceEcRack(): use ecb.diskType instead of types.HardDriveType - collectVolumeIdToEcNodes(): use ecb.diskType 3. Add -diskType flag to ec.rebuild command: - Add diskType field to ecRebuilder struct - Pass diskType to collectEcNodes() and addEcVolumeShards() 4. 
Add -diskType flag to volumeServer.evacuate command: - Add diskType field to commandVolumeServerEvacuate struct - Pass diskType to collectEcVolumeServersByDc() and moveMountedShardToEcNode() * test: add diskType field to ecBalancer in TestPickEcNodeToBalanceShardsInto Address nitpick comment: ensure test ecBalancer struct has diskType field set for consistency with other tests. * ec: filter disk selection by disk type in pickBestDiskOnNode When evacuating or rebalancing EC shards, pickBestDiskOnNode now filters disks by the target disk type. This ensures: 1. EC shards from SSD disks are moved to SSD disks on destination nodes 2. EC shards from HDD disks are moved to HDD disks on destination nodes 3. No cross-disk-type shard movement occurs This maintains the storage tier isolation when moving EC shards between nodes during evacuation or rebalancing operations. * ec: allow disk type fallback during evacuation Update pickBestDiskOnNode to accept a strictDiskType parameter: - strictDiskType=true (balancing): Only use disks of matching type. This maintains storage tier isolation during normal rebalancing. - strictDiskType=false (evacuation): Prefer same disk type, but fall back to other disk types if no matching disk is available. This ensures evacuation can complete even when same-type capacity is insufficient. Priority order for evacuation: 1. Same disk type with lowest shard count (preferred) 2. Different disk type with lowest shard count (fallback) * test: use defer for lock/unlock to prevent lock leaks Use defer to ensure locks are always released, even on early returns or test failures. This prevents lock leaks that could cause subsequent tests to hang or fail. Changes: - Return early if lock acquisition fails - Immediately defer unlock after successful lock - Remove redundant explicit unlock calls at end of tests - Fix unused variable warning (err -> encodeErr/locErr) * ec: dynamically discover disk types from topology for evacuation Disk types are free-form tags (e.g., 'ssd', 'nvme', 'archive') that come from the topology, not a hardcoded set. Only 'hdd' (or empty) is the default disk type. Use collectVolumeDiskTypes() to discover all disk types present in the cluster topology instead of hardcoding [HardDriveType, SsdType]. * test: add evacuation fallback and cross-rack EC placement tests Add two new integration tests: 1. TestEvacuationFallbackBehavior: - Tests that when same disk type has no capacity, shards fall back to other disk types during evacuation - Creates cluster with 1 SSD + 2 HDD servers (limited SSD capacity) - Verifies pickBestDiskOnNode behavior with strictDiskType=false 2. TestCrossRackECPlacement: - Tests EC shard distribution across different racks - Creates cluster with 4 servers in 4 different racks - Verifies shards are spread across multiple racks - Tests that ec.balance respects rack placement Helper functions added: - startLimitedSsdCluster: 1 SSD + 2 HDD servers - startMultiRackCluster: 4 servers in 4 racks - countShardsPerRack: counts EC shards per rack from disk * test: fix collection mismatch in TestCrossRackECPlacement The EC commands were using collection 'rack_test' but uploaded test data uses collection 'test' (default). This caused ec.encode/ec.balance to not find the uploaded volume. Fix: Change EC commands to use '-collection test' to match the uploaded data. Addresses review comment from PR #7607. 
* test: close log files in MultiDiskCluster.Stop() to prevent FD leaks Track log files in MultiDiskCluster.logFiles and close them in Stop() to prevent file descriptor accumulation in long-running or many-test scenarios. Addresses review comment about logging resources cleanup. * test: improve EC integration tests with proper assertions - Add assertNoFlagError helper to detect flag parsing regressions - Update diskType subtests to fail on flag errors (ec.encode, ec.balance, ec.decode) - Update verify_disktype_flag_parsing to check help output contains diskType - Remove verify_fallback_disk_selection (was documentation-only, not executable) - Add assertion to verify_cross_rack_distribution for minimum 2 racks - Consolidate uploadTestDataWithDiskType to accept collection parameter - Remove duplicate uploadTestDataWithDiskTypeMixed function * test: extract captureCommandOutput helper and fix error handling - Add captureCommandOutput helper to reduce code duplication in diskType tests - Create commandRunner interface to match shell command Do method - Update ec_encode_with_ssd_disktype, ec_balance_with_ssd_disktype, ec_encode_with_source_disktype, ec_decode_with_disktype to use helper - Fix filepath.Glob error handling in countShardsPerRack instead of ignoring it * test: add flag validation to ec_balance_targets_correct_disk_type Add assertNoFlagError calls after ec.balance commands to ensure -diskType flag is properly recognized for both SSD and HDD disk types. * test: add proper assertions for EC command results - ec_encode_with_ssd_disktype: check for expected volume-related errors - ec_balance_with_ssd_disktype: require success with require.NoError - ec_encode_with_source_disktype: check for expected no-volume errors - ec_decode_with_disktype: check for expected no-ec-volume errors - upload_to_ssd_and_hdd: use require.NoError for setup validation Tests now properly fail on unexpected errors rather than just logging. * test: fix missing unlock in ec_encode_with_disk_awareness Add defer unlock pattern to ensure lock is always released, matching the pattern used in other subtests. * test: improve helper robustness - Make assertNoFlagError case-insensitive for pattern matching - Use defer in captureCommandOutput to restore stdout/stderr and close pipe ends to avoid FD leaks even if cmd.Do panics
10 days | fmt | chrislu | 1 | -33/+33
10 days | fix: filer does not support IP whitelist right now #7094 (#7095) | Konstantin Lebedev | 1 | -4/+5
* fix: filer do not support IP whitelist right now #7094 * Apply suggestion from @gemini-code-assist[bot] Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
10 days | Remove default concurrent upload/download limits for best performance (#7712) | Chris Lu | 4 | -9/+9
Change all concurrentUploadLimitMB and concurrentDownloadLimitMB defaults from fixed values (64, 128, 256 MB) to 0 (unlimited). This removes artificial throttling that can limit throughput on high-performance systems, especially on all-flash setups with many cores. Files changed: - volume.go: concurrentUploadLimitMB 256->0, concurrentDownloadLimitMB 256->0 - server.go: filer/volume/s3 concurrent limits 64/128->0 - s3.go: concurrentUploadLimitMB 128->0 - filer.go: concurrentUploadLimitMB 128->0, s3.concurrentUploadLimitMB 128->0 Users can still set explicit limits if needed for resource management.
10 days | fix: weed shell can't connect to master when no volume servers (#7710) | Chris Lu | 1 | -7/+22
fix: weed shell can't connect to master when no volume servers (#7701) When there are no volume servers registered, the master's KeepConnected handler would not send any initial message to clients. This caused the shell's masterClient to block indefinitely on stream.Recv(), preventing it from setting currentMaster and completing the connection handshake. The fix ensures the master always sends at least one message with leader information to newly connected clients, even when ToVolumeLocations() returns an empty slice.
10 days | fix worker -admin -adminServer error (#7706) | MorezMartin | 1 | -2/+2
10 days | docker: add curl for HTTPS healthcheck support (#7709) | Chris Lu | 4 | -3/+4
Alpine's busybox wget does not support --ca-cert, --certificate, and --private-key options required for HTTPS healthchecks with client certificate authentication. Adding curl to Docker images enables proper HTTPS healthchecks. Fixes #7707
10 days | fix object name | chrislu | 1 | -2/+3
10 days | mount: add periodic metadata sync to protect chunks from orphan cleanup (#7700) | Chris Lu | 4 | -0/+180
mount: add periodic metadata flush to protect chunks from orphan cleanup When a file is opened via FUSE mount and written for a long time without being closed, chunks are uploaded to volume servers but the file metadata (containing chunk references) is only saved to the filer on file close. If volume.fsck runs during this window, it may identify these chunks as orphans (not referenced in filer metadata) and purge them, causing data loss. This commit adds a background task that periodically flushes file metadata for open files to the filer, ensuring chunk references are visible to volume.fsck even before files are closed. New option: -metadataFlushSeconds (default: 120) Interval in seconds for flushing dirty file metadata to filer. Set to 0 to disable. Fixes: https://github.com/seaweedfs/seaweedfs/issues/7649
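The background task could look roughly like the ticker loop below; fileHandle, its dirty flag, and flushMetadata are hypothetical stand-ins for the mount package's real types, shown only to illustrate the -metadataFlushSeconds behavior.
```
package main

import (
	"log"
	"sync"
	"time"
)

// fileHandle is a hypothetical stand-in for an open FUSE file whose chunk
// list has not yet been persisted to the filer.
type fileHandle struct {
	path  string
	dirty bool
}

func (fh *fileHandle) flushMetadata() error {
	// ... send the current chunk list / attributes to the filer ...
	log.Printf("flushed metadata for %s", fh.path)
	fh.dirty = false
	return nil
}

// startMetadataFlusher periodically persists metadata of dirty open files so
// volume.fsck can see their chunk references before the files are closed.
// A flushSeconds of 0 disables the loop, matching -metadataFlushSeconds=0.
func startMetadataFlusher(flushSeconds int, mu *sync.Mutex, open map[string]*fileHandle, stop <-chan struct{}) {
	if flushSeconds <= 0 {
		return
	}
	ticker := time.NewTicker(time.Duration(flushSeconds) * time.Second)
	go func() {
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				mu.Lock()
				for _, fh := range open {
					if fh.dirty {
						if err := fh.flushMetadata(); err != nil {
							log.Printf("flush %s: %v", fh.path, err)
						}
					}
				}
				mu.Unlock()
			case <-stop:
				return
			}
		}
	}()
}

func main() {
	var mu sync.Mutex
	open := map[string]*fileHandle{"/mnt/big.log": {path: "/mnt/big.log", dirty: true}}
	stop := make(chan struct{})
	startMetadataFlusher(1, &mu, open, stop)
	time.Sleep(1500 * time.Millisecond)
	close(stop)
}
```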
10 days | Fix s3 versioning listing bugs (#7705) | jfburdet | 2 | -184/+199
* fix: add pagination to list-object-versions for buckets with >1000 objects The findVersionsRecursively() function used a fixed limit of 1000 entries without pagination. This caused objects beyond the first 1000 entries (sorted alphabetically) to never appear in list-object-versions responses. Changes: - Add pagination loop using filer.PaginationSize (1024) - Use isLast flag from s3a.list() to detect end of pagination - Track startFrom marker for each page 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: prevent infinite loop in ListObjects when processing .versions directories The doListFilerEntries() function processes .versions directories in a secondary loop after the main entry loop, but failed to update nextMarker. This caused infinite pagination loops when results were truncated, as the same .versions directories would be reprocessed on each page. Bug introduced by: c196d03951a75d3b8976f556cb0400e5b522edeb ("fix listing object versions (#7006)") 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
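The pagination added in the first fix follows a standard marker loop; listPage below is a hypothetical stand-in for s3a.list(), and only the isLast/startFrom mechanics mirror the commit message.
```
package main

import "fmt"

const paginationSize = 1024 // mirrors filer.PaginationSize mentioned above

// listPage is a hypothetical stand-in for a filer listing call: it returns at
// most `limit` entries starting after `startFrom`, plus an isLast flag.
func listPage(dir, startFrom string, limit int) (entries []string, isLast bool) {
	// ... call the filer ...
	return nil, true
}

// listAllVersions pages through a .versions directory instead of issuing a
// single fixed-limit request, so buckets with >1000 objects are fully listed.
func listAllVersions(dir string) []string {
	var all []string
	startFrom := ""
	for {
		entries, isLast := listPage(dir, startFrom, paginationSize)
		all = append(all, entries...)
		if isLast || len(entries) == 0 {
			break
		}
		startFrom = entries[len(entries)-1] // marker for the next page
	}
	return all
}

func main() {
	fmt.Println(len(listAllVersions("/buckets/my-bucket/obj/.versions")))
}
```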
10 days | filer: add write batching for FoundationDB store to improve throughput (#7708) | Chris Lu | 3 | -3/+175
This addresses issue #7699 where FoundationDB filer store had low throughput (~400-500 obj/s) due to each write operation creating a separate transaction. Changes: - Add writeBatcher that collects multiple writes into batched transactions - New config options: batch_size (default: 100), batch_interval (default: 5ms) - Batching provides ~5.7x throughput improvement (from ~456 to ~2600 obj/s) Benchmark results with different batch sizes: - batch_size=1: ~456 obj/s (baseline, no batching) - batch_size=10: ~2621 obj/s (5.7x improvement) - batch_size=16: ~2514 obj/s (5.5x improvement) - batch_size=100: ~2617 obj/s (5.7x improvement) - batch_size=1000: ~2593 obj/s (5.7x improvement) The batch_interval timer (5ms) ensures writes are flushed promptly even when batch is not full, providing good latency characteristics. Addressed review feedback: - Changed wait=false to wait=true in UpdateEntry/DeleteEntry to properly propagate errors to callers - Fixed timer reset race condition by stopping and draining before reset Fixes #7699
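A rough sketch of the batch-and-flush pattern the commit describes (size threshold plus interval timer, including the stop-and-drain timer reset). The batcher type, channel layout, and commit callback are assumptions, not the FoundationDB store's actual code.
```
package main

import (
	"fmt"
	"time"
)

type write struct {
	key, value []byte
	done       chan error
}

// batcher groups writes into one transaction when either batchSize is reached
// or batchInterval elapses, mirroring the batch_size / batch_interval options.
type batcher struct {
	in            chan *write
	batchSize     int
	batchInterval time.Duration
	commit        func([]*write) error // commits one batched transaction
}

func (b *batcher) run() {
	var pending []*write
	timer := time.NewTimer(b.batchInterval)
	defer timer.Stop()

	flush := func() {
		if len(pending) == 0 {
			return
		}
		err := b.commit(pending)
		for _, w := range pending {
			w.done <- err // propagate the transaction result to each caller
		}
		pending = nil
	}

	for {
		select {
		case w, ok := <-b.in:
			if !ok {
				flush()
				return
			}
			pending = append(pending, w)
			if len(pending) >= b.batchSize {
				flush()
				// stop and drain before reuse, as the commit's race fix notes
				if !timer.Stop() {
					select {
					case <-timer.C:
					default:
					}
				}
				timer.Reset(b.batchInterval)
			}
		case <-timer.C:
			flush()
			timer.Reset(b.batchInterval)
		}
	}
}

func main() {
	b := &batcher{
		in:            make(chan *write, 128),
		batchSize:     100,
		batchInterval: 5 * time.Millisecond,
		commit: func(ws []*write) error {
			fmt.Printf("committed %d writes in one transaction\n", len(ws))
			return nil
		},
	}
	go b.run()
	w := &write{key: []byte("k"), value: []byte("v"), done: make(chan error, 1)}
	b.in <- w
	fmt.Println("write error:", <-w.done)
	close(b.in)
}
```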
11 days | fix: cache successful volume lookups instead of failed ones (#7698) | Chris Lu | 1 | -3/+4
The condition was inverted - it was caching lookups with errors instead of successful lookups. This caused every replicated write to make a gRPC call to master for volume location lookup, resulting in ~1 second latency for writeToReplicas. The bug particularly affected TTL volumes because: - More unique volumes are created (separate pools per TTL) - Volumes expire and get recreated frequently - Each new volume requires a fresh lookup (cache miss) - Higher volume churn = more cache misses = more master lookups With this fix, successful lookups are cached for 10 minutes, reducing replication latency from ~1s to ~10ms for cached volumes.
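The inverted condition and its fix boil down to a one-line change of this shape; the cache type and lookup call here are illustrative, not the actual code.
```
package main

import (
	"fmt"
	"time"
)

type cached struct {
	locations []string
	expiresAt time.Time
}

type locationCache struct{ m map[string]cached }

// lookupFromMaster stands in for the gRPC volume-location lookup.
func lookupFromMaster(vid string) ([]string, error) {
	return []string{"10.0.0.1:8080"}, nil
}

// lookupVolume caches only successful lookups for 10 minutes; the buggy
// version cached the err != nil case, so every replicated write went back to
// the master (~1s) instead of hitting the cache (~10ms).
func (c *locationCache) lookupVolume(vid string) ([]string, error) {
	if e, ok := c.m[vid]; ok && time.Now().Before(e.expiresAt) {
		return e.locations, nil
	}
	locs, err := lookupFromMaster(vid)
	if err == nil { // fixed condition; the bug had `err != nil` here
		c.m[vid] = cached{locations: locs, expiresAt: time.Now().Add(10 * time.Minute)}
	}
	return locs, err
}

func main() {
	c := &locationCache{m: map[string]cached{}}
	locs, _ := c.lookupVolume("3")
	fmt.Println(locs)
	locs, _ = c.lookupVolume("3") // served from cache this time
	fmt.Println(locs)
}
```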
11 days | mount: improve EnsureVisited performance with dedup, parallelism, and batching (#7697) | Chris Lu | 4 | -33/+129
* mount: add singleflight to deduplicate concurrent EnsureVisited calls When multiple goroutines access the same uncached directory simultaneously, they would all make redundant network requests to the filer. This change uses singleflight.Group to ensure only one goroutine fetches the directory entries while others wait for the result. This fixes a race condition where concurrent lookups or readdir operations on the same uncached directory would: 1. Make duplicate network requests to the filer 2. Insert duplicate entries into LevelDB cache 3. Waste CPU and network bandwidth
* mount: fetch parent directories in parallel during EnsureVisited Previously, when accessing a deep path like /a/b/c/d, the parent directories were fetched serially from target to root. This change: 1. Collects all uncached directories from target to root first 2. Fetches them all in parallel using errgroup 3. Relies on singleflight (from previous commit) for deduplication This reduces latency when accessing deep uncached paths, especially in high-latency network environments where parallel requests can significantly improve performance.
* mount: add batch inserts for LevelDB meta cache When populating the meta cache from filer, entries were inserted one-by-one into LevelDB. This change: 1. Adds BatchInsertEntries method to LevelDBStore that uses LevelDB's native batch write API 2. Updates MetaCache to keep a direct reference to the LevelDB store for batch operations 3. Modifies doEnsureVisited to collect entries and insert them in batches of 100 entries Batch writes are more efficient because: - Reduces number of individual write operations - Reduces disk syncs - Improves throughput for large directories
* mount: fix potential nil dereference in MarkChildrenCached Add missing check for inode existence in inode2path map before accessing the InodeEntry. This prevents a potential nil pointer dereference if the inode exists in path2inode but not in inode2path (which could happen due to race conditions or bugs). This follows the same pattern used in IsChildrenCached which properly checks for existence before accessing the entry.
* mount: fix batch flush when last entry is hidden The previous batch insert implementation relied on the isLast flag to flush remaining entries. However, if the last entry is a hidden system entry (like 'topics' or 'etc' in root), the callback returns early and the remaining entries in the batch are never flushed. Fix by: 1. Only flush when batch reaches threshold inside the callback 2. Flush any remaining entries after ReadDirAllEntries completes 3. Use error wrapping instead of logging+returning to avoid duplicate logs 4. Create new slice after flush to allow GC of flushed entries 5. Add documentation for batchInsertSize constant This ensures all entries are properly inserted regardless of whether the last entry is hidden, and prevents memory retention issues.
* mount: add context support for cancellation in EnsureVisited Thread context.Context through the batch insert call chain to enable proper cancellation and timeout support: 1. Use errgroup.WithContext() so if one fetch fails, others are cancelled 2. Add context parameter to BatchInsertEntries for consistency with InsertEntry 3. Pass context to ReadDirAllEntries for cancellation during network calls 4. Check context cancellation before starting work in doEnsureVisited 5. Use %w for error wrapping to preserve error types for inspection This prevents unnecessary work when one directory fetch fails and makes the batch operations consistent with the existing context-aware APIs.
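The deduplication step uses golang.org/x/sync/singleflight. A minimal sketch of guarding a directory fetch this way, where fetchDirectoryFromFiler is a hypothetical placeholder for the real filer call:
```
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

var fetchGroup singleflight.Group

// fetchDirectoryFromFiler is a placeholder for the network call that lists a
// directory on the filer and fills the local meta cache.
func fetchDirectoryFromFiler(dir string) ([]string, error) {
	fmt.Println("fetching", dir, "from filer") // runs once per key at a time
	return []string{"a.txt", "b.txt"}, nil
}

// ensureVisited lets only one goroutine fetch a given directory; concurrent
// callers for the same path share that single result instead of issuing
// duplicate filer requests and duplicate cache inserts.
func ensureVisited(dir string) ([]string, error) {
	v, err, _ := fetchGroup.Do(dir, func() (interface{}, error) {
		return fetchDirectoryFromFiler(dir)
	})
	if err != nil {
		return nil, err
	}
	return v.([]string), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			entries, _ := ensureVisited("/a/b/c")
			_ = entries
		}()
	}
	wg.Wait()
}
```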
11 days | mount: improve NFS directory listing (#7696) | Chris Lu | 1 | -29/+24
mount: remove unused isEarlyTerminated variable The variable was redundant because when processEachEntryFn returns false, we immediately return fuse.OK, so the check was always false.
11 days | fix nfs list with prefix batch scan (#7694) | Bruce Zou | 1 | -30/+69
* fix nfs list with prefix batch scan * remove else branch
11 days | fix: prevent filer.backup stall in single-filer setups (#7695) | Chris Lu | 6 | -1/+1090
* fix: prevent filer.backup stall in single-filer setups (#4977) When MetaAggregator.MetaLogBuffer is empty (which happens in single-filer setups with no peers), ReadFromBuffer was returning nil error, causing LoopProcessLogData to enter an infinite wait loop on ListenersCond. This fix returns ResumeFromDiskError instead, allowing SubscribeMetadata to loop back and read from persisted logs on disk. This ensures filer.backup continues processing events even when the in-memory aggregator buffer is empty. Fixes #4977 * test: add integration tests for metadata subscription Add integration tests for metadata subscription functionality: - TestMetadataSubscribeBasic: Tests basic subscription and event receiving - TestMetadataSubscribeSingleFilerNoStall: Regression test for #4977, verifies subscription doesn't stall under high load in single-filer setups - TestMetadataSubscribeResumeFromDisk: Tests resuming subscription from disk Related to #4977 * ci: add GitHub Actions workflow for metadata subscribe tests Add CI workflow that runs on: - Push/PR to master affecting filer, log_buffer, or metadata subscribe code - Runs the integration tests for metadata subscription - Uploads logs on failure for debugging Related to #4977 * fix: use multipart form-data for file uploads in integration tests The filer expects multipart/form-data for file uploads, not raw POST body. This fixes the 'Content-Type isn't multipart/form-data' error. * test: use -peers=none for faster master startup * test: add -peers=none to remaining master startup in ec tests * fix: use filer HTTP port 8888, WithFilerClient adds 10000 for gRPC WithFilerClient calls ToGrpcAddress() which adds 10000 to the port. Passing 18888 resulted in connecting to 28888. Use 8888 instead. * test: add concurrent writes and million updates tests - TestMetadataSubscribeConcurrentWrites: 50 goroutines writing 20 files each - TestMetadataSubscribeMillionUpdates: 1 million metadata entries via gRPC (metadata only, no actual file content for speed) * fix: address PR review comments - Handle os.MkdirAll errors explicitly instead of ignoring - Handle log file creation errors with proper error messages - Replace silent event dropping with 100ms timeout and warning log * Update metadata_subscribe_integration_test.go
11 days | fix: skip log files with deleted volumes in filer backup (#7692) | Chris Lu | 3 | -15/+78
fix: skip log files with deleted volumes in filer backup (#3720) When filer.backup or filer.meta.backup resumes after being stopped, it may encounter persisted log files stored on volumes that have since been deleted (via volume.deleteEmpty -force). Previously, this caused the backup to get stuck in an infinite retry loop with 'volume X not found' errors. This fix catches 'volume not found' errors when reading log files and skips the problematic file instead of failing. The backup will now: - Log a warning about the missing volume - Skip the problematic log file - Continue with the next log file, allowing progress The VolumeNotFoundPattern regex was already defined but never used - this change puts it to use. Fixes #3720
11 days | helm: fix admin secret template paths and remove duplicate (#7690) | Chris Lu | 3 | -10/+55
* add admin and worker to helm charts * workers are stateless, admin is stateful * removed the duplicate admin-deployment.yaml * address comments * address comments * purge * Update README.md * Update k8s/charts/seaweedfs/templates/admin/admin-ingress.yaml Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * address comments * address comments * supports Kubernetes versions from v1.14 to v1.30+, ensuring broad compatibility * add probe for workers * address comments * add a todo * chore: trigger CI * use port name for probes in admin statefulset * add secrets to admin helm chart * fix error .Values.admin.secret.existingSecret * helm: fix admin secret template paths and remove duplicate - Fix value paths to use .Values.admin.secret.existingSecret instead of .Values.existingSecret - Use templated secret name {{ template "seaweedfs.name" . }}-admin-secret - Add .Values.admin.enabled check to admin-secret.yaml - Remove duplicate admin-secret.yaml from templates/ root * helm: address PR review feedback - Only pass adminUser/adminPassword args when auth is enabled (fixes regression) - Use $adminSecretName variable to reduce duplication (DRY) - Only create admin-secret when adminPassword is set - Add documentation comments for existingSecret, userKey, pwKey fields - Clarify that empty adminPassword disables authentication * helm: quote admin credentials to handle spaces * helm: fix yaml lint errors (comment spacing, trailing blank line) * helm: add validation for existingSecret requiring userKey and pwKey --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: Ubuntu <morez.martin@gmail.com>
11 days | Helm Charts: add admin and worker to helm charts (#7688) | Chris Lu | 11 | -1/+1225
* add admin and worker to helm charts * workers are stateless, admin is stateful * removed the duplicate admin-deployment.yaml * address comments * address comments * purge * Update README.md * Update k8s/charts/seaweedfs/templates/admin/admin-ingress.yaml Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * address comments * address comments * supports Kubernetes versions from v1.14 to v1.30+, ensuring broad compatibility * add probe for workers * address comments * add a todo * chore: trigger CI * use port name for probes in admin statefulset * fix: remove trailing blank line in values.yaml * address code review feedback - Quote admin credentials in shell command to handle special characters - Remove unimplemented capabilities (remote, replication) from worker defaults - Add security note about admin password character restrictions --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
11 days | fix: return error on size mismatch in ReadNeedleMeta for consistency (#7687) | Chris Lu | 1 | -0/+1
* fix: return error on size mismatch in ReadNeedleMeta for consistency When ReadNeedleMeta encounters a size mismatch at offset >= MaxPossibleVolumeSize, it previously just continued without returning an error, potentially using wrong data. This fix makes ReadNeedleMeta consistent with ReadBytes (needle_read.go), which properly returns an error in both cases: - ErrorSizeMismatch when offset < MaxPossibleVolumeSize (to trigger retry at offset+32GB) - A descriptive error when offset >= MaxPossibleVolumeSize (after retry failed) Fixes #7673 * refactor: use more accurate error message for size mismatch
11 days | fix: prevent empty .vif files from ec.decode causing parse errors (#7686) | Chris Lu | 2 | -0/+17
* fix: prevent empty .vif files from ec.decode causing parse errors When ec.decode copies .vif files from EC shard nodes, if a source node doesn't have the .vif file, an empty .vif file was created on the target node. This caused volume.configure.replication to fail with 'proto: syntax error' when trying to parse the empty file. This fix: 1. In writeToFile: Remove empty files when no data was written (source file was not found) to avoid leaving corrupted empty files 2. In MaybeLoadVolumeInfo: Handle empty .vif files gracefully by treating them as non-existent, allowing the system to create a proper one Fixes #7666 * refactor: remove redundant dst.Close() and add error logging Address review feedback: - Remove redundant dst.Close() call since defer already handles it - Add error logging for os.Remove() failure
11 days | mount: fix weed inode nlookup not matching kernel inode nlookup (#7682) | Chen Pu | 2 | -18/+28
* mount: fix weed inode nlookup do not equel kernel inode nlookup * mount: add underflow protection for nlookup decrement in Forget * mount: use consistent == 0 check for uint64 nlookup * Update weed/mount/inode_to_path.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * mount: snapshot data before unlock in Forget to avoid using deleted InodeEntry --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
11 days | s3api: remove redundant auth verification in getRequestDataReader (#7685) | Chris Lu | 1 | -6/+1
* s3api: remove redundant auth verification in getRequestDataReader The handlers PutObjectHandler and PutObjectPartHandler are already wrapped with s3a.iam.Auth() middleware which performs signature verification via authRequest() before the handler is invoked. The signature verification for authTypeSignedV2, authTypePresignedV2, authTypePresigned, and authTypeSigned in getRequestDataReader was therefore redundant. The newChunkedReader() call for streaming auth types is kept as it's needed to parse the chunked transfer encoding and extract the actual data. Fixes #7683 * simplify switch to if statement for single condition
11 days | s3: add s3:ExistingObjectTag condition support for bucket policies (#7677) | Chris Lu | 11 | -105/+762
* s3: add s3:ExistingObjectTag condition support in policy engine Add support for s3:ExistingObjectTag/<tag-key> condition keys in bucket policies, allowing access control based on object tags. Changes: - Add ObjectEntry field to PolicyEvaluationArgs (entry.Extended metadata) - Update EvaluateConditions to handle s3:ExistingObjectTag/<key> format - Extract tag value from entry metadata using X-Amz-Tagging-<key> prefix This enables policies like: { "Condition": { "StringEquals": { "s3:ExistingObjectTag/status": ["public"] } } } Fixes: https://github.com/seaweedfs/seaweedfs/issues/7447 * s3: update EvaluatePolicy to accept object entry for tag conditions Update BucketPolicyEngine.EvaluatePolicy to accept objectEntry parameter (entry.Extended metadata) for evaluating tag-based policy conditions. Changes: - Add objectEntry parameter to EvaluatePolicy method - Update callers in auth_credentials.go and s3api_bucket_handlers.go - Pass nil for objectEntry in auth layer (entry fetched later in handlers) For tag-based conditions to work, handlers should call EvaluatePolicy with the object's entry.Extended after fetching the entry from filer. * s3: add tests for s3:ExistingObjectTag policy conditions Add comprehensive tests for object tag-based policy conditions: - TestExistingObjectTagCondition: Basic tag matching scenarios - Matching/non-matching tag values - Missing tags, no tags, empty tags - Multiple tags with one matching - TestExistingObjectTagConditionMultipleTags: Multiple tag conditions - Both tags match - Only one tag matches - TestExistingObjectTagDenyPolicy: Deny policies with tag conditions - Default allow without tag - Deny when specific tag present * s3: document s3:ExistingObjectTag support and feature status Update policy engine documentation: - Add s3:ExistingObjectTag/<tag-key> to supported condition keys - Add 'Object Tag-Based Access Control' section with examples - Add 'Feature Status' section with implemented and planned features Planned features for future implementation: - s3:RequestObjectTag/<key> - s3:RequestObjectTagKeys - s3:x-amz-server-side-encryption - Cross-account access * Implement tag-based policy re-check in handlers - Add checkPolicyWithEntry helper to S3ApiServer for handlers to re-check policy after fetching object entry (for s3:ExistingObjectTag conditions) - Add HasPolicyForBucket method to policy engine for efficient check - Integrate policy re-check in GetObjectHandler after entry is fetched - Integrate policy re-check in HeadObjectHandler after entry is fetched - Update auth_credentials.go comments to explain two-phase evaluation - Update documentation with supported operations for tag-based conditions This implements 'Approach 1' where handlers re-check the policy with the object entry after fetching it, allowing tag-based conditions to be properly evaluated. * Add integration tests for s3:ExistingObjectTag conditions - Add TestCheckPolicyWithEntry: tests checkPolicyWithEntry helper with various tag scenarios (matching tags, non-matching tags, empty entry, nil entry) - Add TestCheckPolicyWithEntryNoPolicyForBucket: tests early return when no policy - Add TestCheckPolicyWithEntryNilPolicyEngine: tests nil engine handling - Add TestCheckPolicyWithEntryDenyPolicy: tests deny policies with tag conditions - Add TestHasPolicyForBucket: tests HasPolicyForBucket method These tests cover the Phase 2 policy evaluation with object entry metadata, ensuring tag-based conditions are properly evaluated. 
* Address code review nitpicks - Remove unused extractObjectTags placeholder function (engine.go) - Add clarifying comment about s3:ExistingObjectTag/<key> evaluation - Consolidate duplicate tag-based examples in README - Factor out tagsToEntry helper to package level in tests * Address code review feedback - Fix unsafe type assertions in GetObjectHandler and HeadObjectHandler when getting identity from context (properly handle type assertion failure) - Extract getConditionContextValue helper to eliminate duplicated logic between EvaluateConditions and EvaluateConditionsLegacy - Ensure consistent handling of missing condition keys (always return empty slice) * Fix GetObjectHandler to match HeadObjectHandler pattern Add safety check for nil objectEntryForSSE before tag-based policy evaluation, ensuring tag-based conditions are always evaluated rather than silently skipped if entry is unexpectedly nil. Addresses review comment from Copilot. * Fix HeadObject action name in docs for consistency Change 'HeadObject' to 's3:HeadObject' to match other action names. * Extract recheckPolicyWithObjectEntry helper to reduce duplication Move the repeated identity extraction and policy re-check logic from GetObjectHandler and HeadObjectHandler into a shared helper method. * Add validation for empty tag key in s3:ExistingObjectTag condition Prevent potential issues with malformed policies containing s3:ExistingObjectTag/ (empty tag key after slash).
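For readers unfamiliar with how a tag-based condition like the one above can be checked, here is a minimal Go sketch. It assumes tag values live in the entry's Extended map under an `X-Amz-Tagging-<key>` prefix, as the commit describes; the function name `evaluateExistingObjectTag` and its signature are illustrative, not the policy engine's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// evaluateExistingObjectTag sketches how an s3:ExistingObjectTag/<tag-key>
// StringEquals condition could be checked against an object's extended
// metadata. The metadata layout ("X-Amz-Tagging-<key>" prefix) follows the
// commit description; everything else is a simplified stand-in.
func evaluateExistingObjectTag(conditionKey string, allowedValues []string, extended map[string][]byte) bool {
	const prefix = "s3:ExistingObjectTag/"
	if !strings.HasPrefix(conditionKey, prefix) {
		return false
	}
	tagKey := strings.TrimPrefix(conditionKey, prefix)
	if tagKey == "" {
		// Malformed policy: empty tag key after the slash.
		return false
	}
	// Look up the tag value stored in the entry's extended attributes.
	raw, ok := extended["X-Amz-Tagging-"+tagKey]
	if !ok {
		// Object has no such tag; StringEquals cannot match.
		return false
	}
	for _, v := range allowedValues {
		if string(raw) == v {
			return true
		}
	}
	return false
}

func main() {
	extended := map[string][]byte{"X-Amz-Tagging-status": []byte("public")}
	fmt.Println(evaluateExistingObjectTag("s3:ExistingObjectTag/status", []string{"public"}, extended))  // true
	fmt.Println(evaluateExistingObjectTag("s3:ExistingObjectTag/status", []string{"private"}, extended)) // false
}
```

In the two-phase flow described above, a check like this would run a second time in the handler, after the object entry has been fetched from the filer.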
12 daysfix: add missing backslash for volume extraArgs in helm chart (#7676)Chris Lu44-58/+83
Fixes #7467 The -mserver argument line in volume-statefulset.yaml was missing a trailing backslash, which prevented extraArgs from being passed to the weed volume process. Also: - Extracted master server list generation logic into shared helper templates in _helpers.tpl for better maintainability - Updated all occurrences of deprecated -mserver flag to -master across docker-compose files, test files, and documentation
12 daysfix: prevent makeslice panic in ReadNeedleMeta with corrupted needle (#7675)Chris Lu1-0/+3
* fix: prevent makeslice panic in ReadNeedleMeta with corrupted needle When a needle's DataSize in the .dat file is corrupted to a very large value, the calculation of metaSize can become negative, causing a panic with 'makeslice: len out of range' when creating the metadata slice. This fix adds validation to check if metaSize is negative before creating the slice, returning a descriptive error instead of panicking. Fixes #7475 * Update weed/storage/needle/needle_read_page.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
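A minimal sketch of the validation this commit describes. The arithmetic and names (`totalSize`, `dataSize`, `metaSize`, `readMeta`) are simplified stand-ins for what ReadNeedleMeta actually computes; the point is the negative-size guard that turns the panic into a descriptive error.

```go
package main

import "fmt"

// readMeta illustrates the guard: if a corrupted DataSize pushes the computed
// metadata size below zero, return an error instead of letting make() panic
// with "makeslice: len out of range".
func readMeta(totalSize int64, dataSize int64) ([]byte, error) {
	metaSize := totalSize - dataSize // a corrupted dataSize can make this negative
	if metaSize < 0 {
		return nil, fmt.Errorf("invalid needle metadata size %d (total %d, data %d): likely corrupted needle", metaSize, totalSize, dataSize)
	}
	return make([]byte, metaSize), nil
}

func main() {
	if _, err := readMeta(128, 1<<40); err != nil {
		fmt.Println("rejected corrupted needle:", err)
	}
}
```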
12 daysmount: add mutex to DirectoryHandle to fix race condition (#7674)Chris Lu1-18/+32
* mount: add mutex to DirectoryHandle to fix race condition When using Ganesha NFS on top of a FUSE mount, ls operations would hang forever on directories with hundreds of files. This was caused by a race condition in DirectoryHandle where multiple concurrent readdir operations could modify shared state (entryStream, entryStreamOffset, isFinished) without synchronization. The fix adds a mutex to DirectoryHandle and holds it for the entire duration of doReadDirectory. This serializes concurrent readdir calls on the same handle, which is the correct behavior for a directory handle and fixes the race condition. Key changes: - Added sync.Mutex to DirectoryHandle struct - Lock the mutex at the start of doReadDirectory to ensure thread-safe access to entryStream and other state - Optimized reset() to reuse slice capacity and allow GC of old entries The lock is per-handle (not global), so different directories can still be listed concurrently. Only concurrent operations on the same directory handle are serialized. Fixes: https://github.com/seaweedfs/seaweedfs/issues/7672
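A simplified sketch of the per-handle locking pattern, assuming a trimmed-down `DirectoryHandle`; the real struct and `doReadDirectory` carry more state, but the mutex usage mirrors the description above: readers on the same handle are serialized, different handles proceed in parallel.

```go
package main

import (
	"fmt"
	"sync"
)

// DirectoryHandle is a stand-in for the FUSE directory handle described above.
type DirectoryHandle struct {
	mu                sync.Mutex
	entryStreamOffset uint64
	entryStream       []string
	isFinished        bool
}

// doReadDirectory holds the handle's mutex for the whole read so that the
// shared state (entryStream, entryStreamOffset, isFinished) is never mutated
// by two goroutines at once.
func (dh *DirectoryHandle) doReadDirectory(batch []string) {
	dh.mu.Lock()
	defer dh.mu.Unlock()
	dh.entryStream = append(dh.entryStream, batch...)
	dh.entryStreamOffset += uint64(len(batch))
	dh.isFinished = len(batch) == 0
}

func main() {
	dh := &DirectoryHandle{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			dh.doReadDirectory([]string{"a", "b"})
		}()
	}
	wg.Wait()
	fmt.Println("entries:", len(dh.entryStream), "offset:", dh.entryStreamOffset)
}
```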
12 dayssts: limit session duration to incoming token's exp claim (#7670)Chris Lu4-10/+279
* sts: limit session duration to incoming token's exp claim This fixes the issue where AssumeRoleWithWebIdentity would issue sessions that outlive the source identity token's expiration. For use cases like GitLab CI Jobs where the ID Token has an exp claim limited to the CI job's timeout, the STS session should not exceed that expiration. Changes: - Add TokenExpiration field to ExternalIdentity struct - Extract exp/iat/nbf claims in OIDC provider's ValidateToken - Pass token expiration from Authenticate to ExternalIdentity - Modify calculateSessionDuration to cap at source token's exp - Add comprehensive tests for the new behavior Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7653 * refactor: reduce duplication in time claim extraction Use a loop over claim names instead of repeating the same extraction logic three times for exp, iat, and nbf claims. * address review: add defense-in-depth for expired tokens - Handle already-expired tokens defensively with 1 minute minimum duration - Enforce MaxSessionLength from config as additional cap - Fix potential nil dereference in test mock - Add test case for expired token scenario * remove issue reference from test * fix: remove early return to ensure MaxSessionLength is always checked
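A hedged sketch of the capping logic, assuming a `capSessionDuration` helper; the real `calculateSessionDuration` is shaped differently, but the ordering follows the commit description: cap at the configured maximum, then at the source token's expiration, with a one-minute floor for already-expired tokens.

```go
package main

import (
	"fmt"
	"time"
)

// capSessionDuration limits the requested session length by the configured
// MaxSessionLength and, when the source identity token carries an exp claim,
// by the time remaining until that expiration. The one-minute defensive
// minimum for expired tokens is taken from the commit description.
func capSessionDuration(requested, maxSession time.Duration, tokenExp *time.Time, now time.Time) time.Duration {
	d := requested
	if maxSession > 0 && d > maxSession {
		d = maxSession
	}
	if tokenExp != nil {
		remaining := tokenExp.Sub(now)
		if remaining <= 0 {
			// Defense in depth: token already expired, issue a minimal session.
			return time.Minute
		}
		if d > remaining {
			d = remaining
		}
	}
	return d
}

func main() {
	now := time.Now()
	exp := now.Add(30 * time.Minute) // e.g. a GitLab CI job token expiring in 30 minutes
	fmt.Println(capSessionDuration(12*time.Hour, 6*time.Hour, &exp, now)) // 30m0s
	fmt.Println(capSessionDuration(12*time.Hour, 6*time.Hour, nil, now))  // 6h0m0s
}
```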
12 daysfix: restore volume mount when VolumeConfigure fails (#7669)Chris Lu1-0/+5
* fix: restore volume mount when VolumeConfigure fails When volume.configure.replication command fails (e.g., due to corrupted .vif file), the volume was left unmounted and the master was already notified that the volume was deleted, causing the volume to disappear. This fix attempts to re-mount the volume when ConfigureVolume fails, restoring the volume state and preventing data loss. Fixes #7666 * include mount restore error in response message
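A sketch of the restore-on-failure pattern, with `unmount`, `configure`, and `mount` passed in as placeholders for the volume server calls; the actual API differs, but the flow mirrors the fix: attempt a re-mount when configuration fails and surface both errors in the response.

```go
package main

import (
	"errors"
	"fmt"
)

// configureVolume unmounts, reconfigures, and (on failure) restores the mount
// so the volume does not silently disappear from the master. Any error from
// the re-mount is included alongside the original configuration error.
func configureVolume(unmount, configure, mount func() error) error {
	if err := unmount(); err != nil {
		return err
	}
	if err := configure(); err != nil {
		// Best effort: restore the previous state before reporting the failure.
		if mountErr := mount(); mountErr != nil {
			return fmt.Errorf("configure failed: %w; re-mount also failed: %v", err, mountErr)
		}
		return fmt.Errorf("configure failed, volume re-mounted: %w", err)
	}
	return nil
}

func main() {
	err := configureVolume(
		func() error { return nil },
		func() error { return errors.New("corrupted .vif file") },
		func() error { return nil },
	)
	fmt.Println(err)
}
```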
12 daysFix webhook duplicate deliveries and POST to GET conversion (#7668)Chris Lu4-16/+564
* Fix webhook duplicate deliveries and POST to GET conversion Fixes #7667 This commit addresses two critical issues with the webhook notification system: 1. Duplicate webhook deliveries based on worker count 2. POST requests being converted to GET when following redirects Issue 1: Multiple webhook deliveries ------------------------------------ Problem: The webhook queue was creating multiple handlers (one per worker) that all subscribed to the same topic. With Watermill's gochannel, each handler creates a separate subscription, and all subscriptions receive their own copy of every message, resulting in duplicate webhook calls equal to the worker count. Solution: Use a single handler instead of multiple handlers to ensure each webhook event is sent only once, regardless of worker configuration. Issue 2: POST to GET conversion with intelligent redirect handling ------------------------------------------------------------------ Problem: When webhook endpoints returned redirects (301/302/303), Go's default HTTP client would automatically follow them and convert POST requests to GET requests per HTTP specification. Solution: Implement intelligent redirect handling that: - Prevents automatic redirects to preserve POST method - Manually follows redirects by recreating POST requests - Caches the final redirect destination for performance - Invalidates cache and retries on failures (network or HTTP errors) - Provides automatic recovery from cached endpoint failures Benefits: - Webhooks are now sent exactly once per event - POST method is always preserved through redirects - Reduced latency through redirect destination caching - Automatic failover when cached destinations become unavailable - Thread-safe concurrent webhook delivery Testing: - Added TestQueueNoDuplicateWebhooks to verify single delivery - Added TestHttpClientFollowsRedirectAsPost for redirect handling - Added TestHttpClientUsesCachedRedirect for caching behavior - Added cache invalidation tests for error scenarios - All 18 webhook tests pass successfully * Address code review comments - Add maxWebhookRetryDepth constant to avoid magic number - Extract cache invalidation logic into invalidateCache() helper method - Fix redirect handling to properly follow redirects even on retry attempts - Remove misleading comment about nWorkers controlling handler parallelism - Fix test assertions to match actual execution flow - Remove trailing whitespace in test file All tests passing. * Refactor: use setFinalURL() instead of invalidateCache() Replace invalidateCache() with more explicit setFinalURL() function. This is cleaner as it makes the intent clear - we're setting the URL (either to a value or to empty string to clear it), rather than having a separate function just for clearing. No functional changes, all tests passing. * Add concurrent webhook delivery using nWorkers configuration Webhooks were previously sent sequentially (one-by-one), which could be a performance bottleneck for high-throughput scenarios. Now nWorkers configuration is properly used to control concurrent webhook delivery. 
Implementation: - Added semaphore channel (buffered to nWorkers capacity) - handleWebhook acquires semaphore slot before sending (blocks if at capacity) - Releases slot after webhook completes - Allows up to nWorkers concurrent webhook HTTP requests Benefits: - Improved throughput for slow webhook endpoints - nWorkers config now has actual purpose (was validated but unused) - Default 5 workers provides good balance - Configurable from 1-100 workers based on needs Example performance improvement: - Before: 500ms webhook latency = ~2 webhooks/sec max - After (5 workers): 500ms latency = ~10 webhooks/sec - After (10 workers): 500ms latency = ~20 webhooks/sec All tests passing. * Replace deprecated AddNoPublisherHandler with AddConsumerHandler AddNoPublisherHandler is deprecated in Watermill. Use AddConsumerHandler instead, which is the current recommended API for handlers that only consume messages without publishing. No functional changes, all tests passing. * Drain response bodies to enable HTTP connection reuse Added drainBody() calls in all code paths to ensure response bodies are consumed before returning. This is critical for HTTP keep-alive connection reuse. Without draining: - Connections are closed after each request - New TCP handshake + TLS handshake for every webhook - Higher latency and resource usage With draining: - Connections are reused via HTTP keep-alive - Significant performance improvement for repeated webhooks - Lower latency (no handshake overhead) - Reduced resource usage Implementation: - Added drainBody() helper that reads up to 1MB (prevents memory issues) - Drain on success path (line 161) - Drain on error responses before retry (lines 119, 152) - Drain on redirect responses before following (line 118) - Already had drainResponse() for network errors (line 99) All tests passing. * Use existing CloseResponse utility instead of custom drainBody Replaced custom drainBody() function with the existing util_http.CloseResponse() utility which is already used throughout the codebase. This provides: - Consistent behavior with rest of the codebase - Better logging (logs bytes drained via CountingReader) - Full body drainage (not limited to 1MB) - Cleaner code (no duplication) CloseResponse properly drains and closes the response body to enable HTTP keep-alive connection reuse. All tests passing. * Fix: Don't overwrite original error when draining response Before: err was being overwritten by drainResponse() result After: Use drainErr to avoid losing the original client.Do() error This was a subtle bug where if drainResponse() succeeded (returned nil), we would lose the original network error and potentially return a confusing error message. All tests passing. * Optimize HTTP client: reuse client and remove redundant timeout 1. Reuse single http.Client instance instead of creating new one per request - Reduces allocation overhead - More efficient for high-volume webhooks 2. Remove redundant timeout configuration - Before: timeout set on both context AND http.Client - After: timeout only on context (cleaner, context fires first anyway) Performance benefits: - Reduced GC pressure (fewer client allocations) - Better connection pooling (single transport instance) - Cleaner code (no redundancy) All tests passing.
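A self-contained sketch of the redirect handling for issue 2, assuming a `postFollowingRedirects` helper; this is not the webhook client's real code. Automatic redirects are disabled via `http.ErrUseLastResponse` so the POST method and body are never rewritten to GET, and redirects are followed by re-issuing the POST manually. The destination caching and retry-depth bookkeeping from the commit are omitted for brevity.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// postFollowingRedirects re-issues the POST against the Location header for
// redirect responses, draining each body so connections can be reused.
func postFollowingRedirects(client *http.Client, url string, body []byte, maxHops int) (*http.Response, error) {
	for hop := 0; hop <= maxHops; hop++ {
		resp, err := client.Post(url, "application/json", bytes.NewReader(body))
		if err != nil {
			return nil, err
		}
		switch resp.StatusCode {
		case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther,
			http.StatusTemporaryRedirect, http.StatusPermanentRedirect:
			next := resp.Header.Get("Location")
			io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
			resp.Body.Close()
			if next == "" {
				return nil, fmt.Errorf("redirect without Location header")
			}
			url = next
			continue
		default:
			return resp, nil
		}
	}
	return nil, fmt.Errorf("too many redirects")
}

func main() {
	target := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "method=%s", r.Method) // should stay POST
	}))
	defer target.Close()
	redirector := httptest.NewServer(http.RedirectHandler(target.URL, http.StatusMovedPermanently))
	defer redirector.Close()

	// Disable the client's own redirect following so POST is never converted to GET.
	client := &http.Client{CheckRedirect: func(*http.Request, []*http.Request) error { return http.ErrUseLastResponse }}
	resp, err := postFollowingRedirects(client, redirector.URL, []byte(`{"event":"put"}`), 3)
	if err != nil {
		panic(err)
	}
	b, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println(string(b)) // method=POST
}
```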
12 daysNit: have `ec.encode` exit immediately if no volumes are processed. (#7654)Lisandro Pin1-0/+4
* Nit: have `ec.encode` exit immediately if no volumes are processed. * Update weed/shell/command_ec_encode.go Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> --------- Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
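A tiny sketch of the early-exit guard, with `collectVolumeIds` standing in for the command's volume selection logic; if nothing matches, the command returns immediately instead of running the rest of the encoding pipeline.

```go
package main

import "fmt"

// ecEncode returns early when no volumes match the selection criteria.
func ecEncode(collectVolumeIds func() []uint32) error {
	volumeIds := collectVolumeIds()
	if len(volumeIds) == 0 {
		fmt.Println("no volumes to encode")
		return nil
	}
	fmt.Println("encoding", len(volumeIds), "volumes")
	return nil
}

func main() {
	_ = ecEncode(func() []uint32 { return nil })
	_ = ecEncode(func() []uint32 { return []uint32{1, 2, 3} })
}
```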
12 daysUpdate notification.tomlchrislu1-1/+3
12 daysAdded a complete webhook configuration examplechrislu1-0/+15
12 dayschore(deps): bump golang.org/x/sync from 0.18.0 to 0.19.0 (#7664)dependabot[bot]4-6/+6
* chore(deps): bump golang.org/x/sync from 0.18.0 to 0.19.0 Bumps [golang.org/x/sync](https://github.com/golang/sync) from 0.18.0 to 0.19.0. - [Commits](https://github.com/golang/sync/compare/v0.18.0...v0.19.0) --- updated-dependencies: - dependency-name: golang.org/x/sync dependency-version: 0.19.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> * tidy --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com>
12 dayschore(deps): bump github.com/aws/aws-sdk-go-v2/credentials from 1.18.20 to ↵dependabot[bot]2-15/+15
1.19.3 (#7663) chore(deps): bump github.com/aws/aws-sdk-go-v2/credentials Bumps [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) from 1.18.20 to 1.19.3. - [Release notes](https://github.com/aws/aws-sdk-go-v2/releases) - [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json) - [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.20...service/pi/v1.19.3) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go-v2/credentials dependency-version: 1.19.3 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
12 dayschore(deps): bump github.com/klauspost/reedsolomon from 1.12.5 to 1.12.6 (#7662)dependabot[bot]4-6/+6
* chore(deps): bump github.com/klauspost/reedsolomon from 1.12.5 to 1.12.6 Bumps [github.com/klauspost/reedsolomon](https://github.com/klauspost/reedsolomon) from 1.12.5 to 1.12.6. - [Release notes](https://github.com/klauspost/reedsolomon/releases) - [Commits](https://github.com/klauspost/reedsolomon/compare/v1.12.5...v1.12.6) --- updated-dependencies: - dependency-name: github.com/klauspost/reedsolomon dependency-version: 1.12.6 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> * tidy --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com>