AgeCommit messageAuthorFilesLines
2025-10-28[master] vacuum fix warn (#7312)Konstantin Lebedev1-1/+1
2025-10-28IAM: add support for advanced IAM config file to server command (#7317)Nial2-1/+20
* IAM: add support for advanced IAM config file to server command * Add support for advanced IAM config file in S3 options * Fix S3 IAM config handling to simplify checks for configuration presence * simplify * simplify again * copy the value * const --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
2025-10-28docker containers: add non-root user (#7399)Chris Lu4-13/+44
* add non-root user * using -g more clearly expresses the intent of setting the primary group for the new user * no cache * read only * specific perm
2025-10-28S3: auth supports X-Forwarded-Host and X-Forwarded-Port (#7398)Chris Lu3-1/+236
* add fix and tests * address comments * idiomatic * ipv6
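For context, a minimal sketch of what resolving the client-signed host behind a reverse proxy can look like: SigV4 signatures cover the Host header, so when a proxy rewrites it, the server has to rebuild the external host from X-Forwarded-Host/X-Forwarded-Port. The helper name and fallback order below are assumptions, not the exact SeaweedFS code, which also has to handle default ports and IPv6 bracketing.

```go
package main

import (
	"fmt"
	"net/http"
)

// externalHost returns the host:port the client most likely signed against.
// Assumption: X-Forwarded-Host carries no port; real code must also handle
// default ports (80/443) and bracketed IPv6 literals.
func externalHost(r *http.Request) string {
	host := r.Host // default: the Host header as seen by this server
	if fh := r.Header.Get("X-Forwarded-Host"); fh != "" {
		host = fh
		if fp := r.Header.Get("X-Forwarded-Port"); fp != "" {
			host = fmt.Sprintf("%s:%s", fh, fp)
		}
	}
	return host
}

func main() {
	r, _ := http.NewRequest("GET", "http://internal:8333/bucket/key", nil)
	r.Host = "internal:8333"
	r.Header.Set("X-Forwarded-Host", "s3.example.com")
	r.Header.Set("X-Forwarded-Port", "443")
	fmt.Println(externalHost(r)) // s3.example.com:443
}
```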
2025-10-27go fmtchrislu32-363/+337
2025-10-27Erasure Coding: Ec refactoring (#7396)Chris Lu15-296/+564
* refactor: add ECContext structure to encapsulate EC parameters - Create ec_context.go with ECContext struct - NewDefaultECContext() creates context with default 10+4 configuration - Helper methods: CreateEncoder(), ToExt(), String() - Foundation for cleaner function signatures - No behavior change, still uses hardcoded 10+4 * refactor: update ec_encoder.go to use ECContext - Add WriteEcFilesWithContext() and RebuildEcFilesWithContext() functions - Keep old functions for backward compatibility (call new versions) - Update all internal functions to accept ECContext parameter - Use ctx.DataShards, ctx.ParityShards, ctx.TotalShards consistently - Use ctx.CreateEncoder() instead of hardcoded reedsolomon.New() - Use ctx.ToExt() for shard file extensions - No behavior change, still uses default 10+4 configuration * refactor: update ec_volume.go to use ECContext - Add ECContext field to EcVolume struct - Initialize ECContext with default configuration in NewEcVolume() - Update LocateEcShardNeedleInterval() to use ECContext.DataShards - Phase 1: Always uses default 10+4 configuration - No behavior change * refactor: add EC shard count fields to VolumeInfo protobuf - Add data_shards_count field (field 8) to VolumeInfo message - Add parity_shards_count field (field 9) to VolumeInfo message - Fields are optional, 0 means use default (10+4) - Backward compatible: fields added at end - Phase 1: Foundation for future customization * refactor: regenerate protobuf Go files with EC shard count fields - Regenerated volume_server_pb/*.go with new EC fields - DataShardsCount and ParityShardsCount accessors added to VolumeInfo - No behavior change, fields not yet used * refactor: update VolumeEcShardsGenerate to use ECContext - Create ECContext with default configuration in VolumeEcShardsGenerate - Use ecCtx.TotalShards and ecCtx.ToExt() in cleanup - Call WriteEcFilesWithContext() instead of WriteEcFiles() - Save EC configuration (DataShardsCount, ParityShardsCount) to VolumeInfo - Log EC context being used - Phase 1: Always uses default 10+4 configuration - No behavior change * fmt * refactor: update ec_test.go to use ECContext - Update TestEncodingDecoding to create and use ECContext - Update validateFiles() to accept ECContext parameter - Update removeGeneratedFiles() to use ctx.TotalShards and ctx.ToExt() - Test passes with default 10+4 configuration * refactor: use EcShardConfig message instead of separate fields * optimize: pre-calculate row sizes in EC encoding loop * refactor: replace TotalShards field with Total() method - Remove TotalShards field from ECContext to avoid field drift - Add Total() method that computes DataShards + ParityShards - Update all references to use ctx.Total() instead of ctx.TotalShards - Read EC config from VolumeInfo when loading EC volumes - Read data shard count from .vif in VolumeEcShardsToVolume - Use >= instead of > for exact boundary handling in encoding loops * optimize: simplify VolumeEcShardsToVolume to use existing EC context - Remove redundant CollectEcShards call - Remove redundant .vif file loading - Use v.ECContext.DataShards directly (already loaded by NewEcVolume) - Slice tempShards instead of collecting again * refactor: rename MaxShardId to MaxShardCount for clarity - Change from MaxShardId=31 to MaxShardCount=32 - Eliminates confusing +1 arithmetic (MaxShardId+1) - More intuitive: MaxShardCount directly represents the limit fix: support custom EC ratios beyond 14 shards in VolumeEcShardsToVolume - Add MaxShardId constant (31, since ShardBits is uint32) - 
Use MaxShardId+1 (32) instead of TotalShardsCount (14) for tempShards buffer - Prevents panic when slicing for volumes with >14 total shards - Critical fix for custom EC configurations like 20+10 * fix: add validation for EC shard counts from VolumeInfo - Validate DataShards/ParityShards are positive and within MaxShardCount - Prevent zero or invalid values that could cause divide-by-zero - Fallback to defaults if validation fails, with warning log - VolumeEcShardsGenerate now preserves existing EC config when regenerating - Critical safety fix for corrupted or legacy .vif files * fix: RebuildEcFiles now loads EC config from .vif file - Critical: RebuildEcFiles was always using default 10+4 config - Now loads actual EC config from .vif file when rebuilding shards - Validates config before use (positive shards, within MaxShardCount) - Falls back to default if .vif missing or invalid - Prevents data corruption when rebuilding custom EC volumes * add: defensive validation for dataShards in VolumeEcShardsToVolume - Validate dataShards > 0 and <= MaxShardCount before use - Prevents panic from corrupted or uninitialized ECContext - Returns clear error message instead of panic - Defense-in-depth: validates even though upstream should catch issues * fix: replace TotalShardsCount with MaxShardCount for custom EC ratio support Critical fixes to support custom EC ratios > 14 shards: disk_location_ec.go: - validateEcVolume: Check shards 0-31 instead of 0-13 during validation - removeEcVolumeFiles: Remove shards 0-31 instead of 0-13 during cleanup ec_volume_info.go ShardBits methods: - ShardIds(): Iterate up to MaxShardCount (32) instead of TotalShardsCount (14) - ToUint32Slice(): Iterate up to MaxShardCount (32) - IndexToShardId(): Iterate up to MaxShardCount (32) - MinusParityShards(): Remove shards 10-31 instead of 10-13 (added note about Phase 2) - Minus() shard size copy: Iterate up to MaxShardCount (32) - resizeShardSizes(): Iterate up to MaxShardCount (32) Without these changes: - Custom EC ratios > 14 total shards would fail validation on startup - Shards 14-31 would never be discovered or cleaned up - ShardBits operations would miss shards >= 14 These changes are backward compatible - MaxShardCount (32) includes the default TotalShardsCount (14), so existing 10+4 volumes work as before. * fix: replace TotalShardsCount with MaxShardCount in critical data structures Critical fixes for buffer allocations and loops that must support custom EC ratios up to 32 shards: Data Structures: - store_ec.go:354: Buffer allocation for shard recovery (bufs array) - topology_ec.go:14: EcShardLocations.Locations fixed array size - command_ec_rebuild.go:268: EC shard map allocation - command_ec_common.go:626: Shard-to-locations map allocation Shard Discovery Loops: - ec_task.go:378: Loop to find generated shard files - ec_shard_management.go: All 8 loops that check/count EC shards These changes are critical because: 1. Buffer allocations sized to 14 would cause index-out-of-bounds panics when accessing shards 14-31 2. Fixed arrays sized to 14 would truncate shard location data 3. Loops limited to 0-13 would never discover/manage shards 14-31 Note: command_ec_encode.go:208 intentionally NOT changed - it creates shard IDs to mount after encoding. In Phase 1 we always generate 14 shards, so this remains TotalShardsCount and will be made dynamic in Phase 2 based on actual EC context. 
Without these fixes, custom EC ratios > 14 total shards would cause: - Runtime panics (array index out of bounds) - Data loss (shards 14-31 never discovered/tracked) - Incomplete shard management (missing shards not detected) * refactor: move MaxShardCount constant to ec_encoder.go Moved MaxShardCount from ec_volume_info.go to ec_encoder.go to group it with other shard count constants (DataShardsCount, ParityShardsCount, TotalShardsCount). This improves code organization and makes it easier to understand the relationship between these constants. Location: ec_encoder.go line 22, between TotalShardsCount and MinTotalDisks * improve: add defensive programming and better error messages for EC Code review improvements from CodeRabbit: 1. ShardBits Guardrails (ec_volume_info.go): - AddShardId, RemoveShardId: Reject shard IDs >= MaxShardCount - HasShardId: Return false for out-of-range shard IDs - Prevents silent no-ops from bit shifts with invalid IDs 2. Future-Proof Regex (disk_location_ec.go): - Updated regex from \.ec[0-9][0-9] to \.ec\d{2,3} - Now matches .ec00 through .ec999 (currently .ec00-.ec31 used) - Supports future increases to MaxShardCount beyond 99 3. Better Error Messages (volume_grpc_erasure_coding.go): - Include valid range (1..32) in dataShards validation error - Helps operators quickly identify the problem 4. Validation Before Save (volume_grpc_erasure_coding.go): - Validate ECContext (DataShards > 0, ParityShards > 0, Total <= MaxShardCount) - Log EC config being saved to .vif for debugging - Prevents writing invalid configs to disk These changes improve robustness and debuggability without changing core functionality. * fmt * fix: critical bugs from code review + clean up comments Critical bug fixes: 1. command_ec_rebuild.go: Fixed indentation causing compilation error - Properly nested if/for blocks in registerEcNode 2. ec_shard_management.go: Fixed isComplete logic incorrectly using MaxShardCount - Changed from MaxShardCount (32) back to TotalShardsCount (14) - Default 10+4 volumes were being incorrectly reported as incomplete - Missing shards 14-31 were being incorrectly reported as missing - Fixed in 4 locations: volume completeness checks and getMissingShards 3. ec_volume_info.go: Fixed MinusParityShards removing too many shards - Changed from MaxShardCount (32) back to TotalShardsCount (14) - Was incorrectly removing shard IDs 10-31 instead of just 10-13 Comment cleanup: - Removed Phase 1/Phase 2 references (development plan context) - Replaced with clear statements about default 10+4 configuration - SeaweedFS repo uses fixed 10+4 EC ratio, no phases needed Root cause: Over-aggressive replacement of TotalShardsCount with MaxShardCount. MaxShardCount (32) is the limit for buffer allocations and shard ID loops, but TotalShardsCount (14) must be used for default EC configuration logic. * fix: add defensive bounds checks and compute actual shard counts Critical fixes from code review: 1. topology_ec.go: Add defensive bounds checks to AddShard/DeleteShard - Prevent panic when shardId >= MaxShardCount (32) - Return false instead of crashing on out-of-range shard IDs 2. 
command_ec_common.go: Fix doBalanceEcShardsAcrossRacks - Was using hardcoded TotalShardsCount (14) for all volumes - Now computes actual totalShardsForVolume from rackToShardCount - Fixes incorrect rebalancing for volumes with custom EC ratios - Example: 5+2=7 shards would incorrectly use 14 as average These fixes improve robustness and prepare for future custom EC ratios without changing current behavior for default 10+4 volumes. Note: MinusParityShards and ec_task.go intentionally NOT changed for seaweedfs repo - these will be enhanced in seaweed-enterprise repo where custom EC ratio configuration is added. * fmt * style: make MaxShardCount type casting explicit in loops Improved code clarity by explicitly casting MaxShardCount to the appropriate type when used in loop comparisons: - ShardId comparisons: Cast to ShardId(MaxShardCount) - uint32 comparisons: Cast to uint32(MaxShardCount) Changed in 5 locations: - Minus() loop (line 90) - ShardIds() loop (line 143) - ToUint32Slice() loop (line 152) - IndexToShardId() loop (line 219) - resizeShardSizes() loop (line 248) This makes the intent explicit and improves type safety readability. No functional changes - purely a style improvement.
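A minimal sketch of the ECContext idea described above: bundle the shard counts that were previously hardcoded and derive everything else from them. Method names (CreateEncoder, ToExt, Total) and constants (DataShardsCount, ParityShardsCount, MaxShardCount) follow the commit text; the surrounding details are an illustrative approximation, not the exact SeaweedFS code.

```go
package main

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

const (
	DataShardsCount   = 10 // default data shards
	ParityShardsCount = 4  // default parity shards
	MaxShardCount     = 32 // hard limit: ShardBits is a uint32 bitmap
)

type ECContext struct {
	DataShards   int
	ParityShards int
}

func NewDefaultECContext() *ECContext {
	return &ECContext{DataShards: DataShardsCount, ParityShards: ParityShardsCount}
}

// Total is computed instead of stored, so the two fields cannot drift apart.
func (c *ECContext) Total() int { return c.DataShards + c.ParityShards }

// CreateEncoder builds a Reed-Solomon encoder for this shard layout.
func (c *ECContext) CreateEncoder() (reedsolomon.Encoder, error) {
	return reedsolomon.New(c.DataShards, c.ParityShards)
}

// ToExt returns the shard file extension, e.g. ".ec00" .. ".ec31".
func (c *ECContext) ToExt(shardId int) string { return fmt.Sprintf(".ec%02d", shardId) }

func (c *ECContext) String() string {
	return fmt.Sprintf("%d+%d", c.DataShards, c.ParityShards)
}

func main() {
	ctx := NewDefaultECContext()
	fmt.Printf("EC layout %s, %d shards, first shard ext %s\n", ctx, ctx.Total(), ctx.ToExt(0))
}
```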
2025-10-27fix nilchrislu1-1/+4
2025-10-27revert back s3 in helm chart to falsechrislu1-1/+1
fix https://github.com/seaweedfs/seaweedfs/issues/7375
2025-10-273.993.99chrislu1-1/+1
2025-10-27chore(deps): bump github.com/rclone/rclone from 1.71.1 to 1.71.2 (#7393)dependabot[bot]2-6/+6
Bumps [github.com/rclone/rclone](https://github.com/rclone/rclone) from 1.71.1 to 1.71.2. - [Release notes](https://github.com/rclone/rclone/releases) - [Changelog](https://github.com/rclone/rclone/blob/master/RELEASE.md) - [Commits](https://github.com/rclone/rclone/compare/v1.71.1...v1.71.2) --- updated-dependencies: - dependency-name: github.com/rclone/rclone dependency-version: 1.71.2 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27chore(deps): bump github.com/aws/aws-sdk-go-v2/credentials from 1.18.10 to 1.18.19 (#7392)dependabot[bot]2-33/+33
chore(deps): bump github.com/aws/aws-sdk-go-v2/credentials Bumps [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) from 1.18.10 to 1.18.19. - [Release notes](https://github.com/aws/aws-sdk-go-v2/releases) - [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json) - [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.10...config/v1.18.19) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go-v2/credentials dependency-version: 1.18.19 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

2025-10-27chore(deps): bump github.com/prometheus/procfs from 0.17.0 to 0.19.1 (#7388)dependabot[bot]4-6/+6
* chore(deps): bump github.com/prometheus/procfs from 0.17.0 to 0.19.1 Bumps [github.com/prometheus/procfs](https://github.com/prometheus/procfs) from 0.17.0 to 0.19.1. - [Release notes](https://github.com/prometheus/procfs/releases) - [Commits](https://github.com/prometheus/procfs/compare/v0.17.0...v0.19.1) --- updated-dependencies: - dependency-name: github.com/prometheus/procfs dependency-version: 0.19.1 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> * go mod tidy --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com>
2025-10-27chore(deps): bump golang.org/x/net from 0.45.0 to 0.46.0 (#7386)dependabot[bot]4-6/+6
* chore(deps): bump golang.org/x/net from 0.45.0 to 0.46.0 Bumps [golang.org/x/net](https://github.com/golang/net) from 0.45.0 to 0.46.0. - [Commits](https://github.com/golang/net/compare/v0.45.0...v0.46.0) --- updated-dependencies: - dependency-name: golang.org/x/net dependency-version: 0.46.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> * go mod tidy --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com>
2025-10-27go mod tidychrislu2-3/+3
2025-10-27chore(deps): bump github.com/getsentry/sentry-go from 0.35.3 to 0.36.1 (#7389)dependabot[bot]2-3/+3
Bumps [github.com/getsentry/sentry-go](https://github.com/getsentry/sentry-go) from 0.35.3 to 0.36.1. - [Release notes](https://github.com/getsentry/sentry-go/releases) - [Changelog](https://github.com/getsentry/sentry-go/blob/master/CHANGELOG.md) - [Commits](https://github.com/getsentry/sentry-go/compare/v0.35.3...v0.36.1) --- updated-dependencies: - dependency-name: github.com/getsentry/sentry-go dependency-version: 0.36.1 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27chore(deps): bump github.com/tarantool/go-tarantool/v2 from 2.4.0 to 2.4.1 (#7391)dependabot[bot]2-3/+3
chore(deps): bump github.com/tarantool/go-tarantool/v2 Bumps [github.com/tarantool/go-tarantool/v2](https://github.com/tarantool/go-tarantool) from 2.4.0 to 2.4.1. - [Release notes](https://github.com/tarantool/go-tarantool/releases) - [Changelog](https://github.com/tarantool/go-tarantool/blob/master/CHANGELOG.md) - [Commits](https://github.com/tarantool/go-tarantool/compare/v2.4.0...v2.4.1) --- updated-dependencies: - dependency-name: github.com/tarantool/go-tarantool/v2 dependency-version: 2.4.1 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27chore(deps): bump github.com/pkg/sftp from 1.13.9 to 1.13.10 (#7390)dependabot[bot]2-3/+3
Bumps [github.com/pkg/sftp](https://github.com/pkg/sftp) from 1.13.9 to 1.13.10. - [Release notes](https://github.com/pkg/sftp/releases) - [Commits](https://github.com/pkg/sftp/compare/v1.13.9...v1.13.10) --- updated-dependencies: - dependency-name: github.com/pkg/sftp dependency-version: 1.13.10 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-27chore(deps): bump actions/upload-artifact from 4 to 5 (#7387)dependabot[bot]7-21/+21
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4 to 5. - [Release notes](https://github.com/actions/upload-artifact/releases) - [Commits](https://github.com/actions/upload-artifact/compare/v4...v5) --- updated-dependencies: - dependency-name: actions/upload-artifact dependency-version: '5' dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-26fix lintchrislu1-1/+1
2025-10-263.99chrislu1-2/+2
2025-10-26fix commentchrislu1-1/+1
2025-10-26Volume Server: handle incomplete ec encoding (#7384)Chris Lu5-16/+1301
* handle incomplete ec encoding * unit tests * simplify, and better logs * Update disk_location_ec.go When loadEcShards() fails partway through, some EC shards may already be loaded into the l.ecVolumes map in memory. The previous code only cleaned up filesystem files but left orphaned in-memory state, which could cause memory leaks and inconsistent state. * address comments * Performance: Avoid Double os.Stat() Call * Platform Compatibility: Use filepath.Join * in memory cleanup * Update disk_location_ec.go * refactor * Added Shard Size Validation * check ec shard sizes * validate shard size * calculate expected shard size * refactoring * minor * fix shard directory * 10GB sparse files can be slow or fail on non-sparse FS. Use 10MB to hit SmallBlockSize math (1MB shards) deterministically. * grouping logic should be updated to use both collection and volumeId to ensure correctness * unexpected error * handle exceptions in tests; use constants * The check for orphaned shards should be performed for the previous volume before resetting sameVolumeShards for the new volume. * address comments * Eliminated Redundant Parsing in checkOrphanedShards * minor * Avoid misclassifying local EC as distributed when .dat stat errors occur; also standardize unload-before-remove. * fmt * refactor * refactor * adjust to warning
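A hedged sketch of the expected-shard-size math this change validates against, assuming .dat bytes are striped across the 10 data shards in large rows first and then 1MB small rows; the block sizes are inferred from the commit's note that a 10MB .dat yields 1MB shards, and all names are illustrative rather than the exact SeaweedFS constants.

```go
package main

import "fmt"

const (
	dataShards     = 10
	largeBlockSize = int64(1024 * 1024 * 1024) // assumed large-row block size
	smallBlockSize = int64(1024 * 1024)        // assumed small-row block size
)

// expectedShardSize estimates how large each .ecXX file should be for a given
// .dat size: full large rows, plus enough small rows to cover the remainder.
func expectedShardSize(datFileSize int64) int64 {
	largeRows := datFileSize / (largeBlockSize * dataShards)
	remaining := datFileSize - largeRows*largeBlockSize*dataShards
	smallRows := (remaining + smallBlockSize*dataShards - 1) / (smallBlockSize * dataShards)
	return largeRows*largeBlockSize + smallRows*smallBlockSize
}

func main() {
	// 10MB .dat -> one small row -> 1MB per shard, matching the test setup above.
	fmt.Println(expectedShardSize(10 * 1024 * 1024)) // 1048576
}
```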
2025-10-25s3: combine all signature verification checks into a single function (#7330)Tom Crasset6-332/+592
2025-10-25Filer: batch deletion operations to return individual error results (#7382)Chris Lu5-44/+86
* batch deletion operations to return individual error results Modify batch deletion operations to return individual error results instead of one aggregated error, enabling better tracking of which specific files failed to delete (helping reduce orphan file issues). * Simplified logging logic * Optimized nested loop * handles the edge case where the RPC succeeds but connection cleanup fails * simplify * simplify * ignore 'not found' errors here
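A hedged sketch of the shape change described above: instead of one aggregated error for the whole batch, return one result per file id so callers can tell exactly which deletions failed (and treat "not found" as success). The type and function names are illustrative assumptions, not the exact SeaweedFS API.

```go
package main

import (
	"fmt"
	"strings"
)

type DeleteResult struct {
	FileId string
	Err    error
}

// deleteFiles reports a per-file outcome rather than a single combined error.
func deleteFiles(fileIds []string, deleteOne func(string) error) []DeleteResult {
	results := make([]DeleteResult, 0, len(fileIds))
	for _, fid := range fileIds {
		err := deleteOne(fid)
		if err != nil && strings.Contains(err.Error(), "not found") {
			err = nil // already gone: not a failure that leaves an orphan behind
		}
		results = append(results, DeleteResult{FileId: fid, Err: err})
	}
	return results
}

func main() {
	fake := func(fid string) error {
		if fid == "3,0201" {
			return fmt.Errorf("volume 3 not reachable")
		}
		return nil
	}
	for _, r := range deleteFiles([]string{"1,0101", "3,0201"}, fake) {
		fmt.Printf("%s -> %v\n", r.FileId, r.Err)
	}
}
```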
2025-10-24Shell: Added a helper function `isHelpRequest()` (#7380)Chris Lu12-7/+87
* Added a helper function `isHelpRequest()` * also handles combined short flags like -lh or -hl * Created handleHelpRequest() helper function encapsulates both: Checking for help flags Printing the help message * Limit to reasonable length (2-4 chars total) to avoid matching long options like -verbose
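A minimal sketch following the commit's description: treat -h/--help/help as help requests, and also recognize 'h' inside short combined groups like -lh or -hl, but only for 2-4 character flags so long options such as -verbose do not match. This is an illustrative reconstruction, not the exact shell code.

```go
package main

import (
	"fmt"
	"strings"
)

func isHelpRequest(args []string) bool {
	for _, arg := range args {
		switch arg {
		case "-h", "--help", "help":
			return true
		}
		// Combined short flags: single dash, 2-4 chars total, containing 'h'.
		if strings.HasPrefix(arg, "-") && !strings.HasPrefix(arg, "--") &&
			len(arg) >= 2 && len(arg) <= 4 && strings.Contains(arg[1:], "h") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isHelpRequest([]string{"-lh"}))        // true
	fmt.Println(isHelpRequest([]string{"-verbose"}))   // false
	fmt.Println(isHelpRequest([]string{"-l", "dir"}))  // false
}
```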
2025-10-24Store full shell command in shell history (#7378)Yavor Konstantinov1-2/+4
Store shell command in history before parsing Store the shell command in history before parsing it. This will allow users to press the 'Up' arrow and see the entire command.
2025-10-24Volume Server: Unexpected Deletion of Remote Tier Data (#7377)Chris Lu1-0/+13
* [Admin UI] Login not possible due to securecookie error * avoid 404 favicon * Update weed/admin/dash/auth_middleware.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * address comments * avoid variable over shadowing * log session save error * When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations. * reuse fileId * refactor * fix deleting remote tier * simplify the fix * Update weed/storage/volume_loading.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/storage/volume_loading.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/storage/volume_loading.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * fix --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-24Clients to volume server requires JWT tokens for all read operations (#7376)Chris Lu6-31/+66
* [Admin UI] Login not possible due to securecookie error * avoid 404 favicon * Update weed/admin/dash/auth_middleware.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * address comments * avoid variable over shadowing * log session save error * When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations. * reuse fileId * refactor --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-24[Admin UI] Login not possible due to securecookie error (#7374)Chris Lu3-27/+87
* [Admin UI] Login not possible due to securecookie error * avoid 404 favicon * Update weed/admin/dash/auth_middleware.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * address comments * avoid variable over shadowing * log session save error --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-24fix: Use a mix of virtual and path styles within a single subdomain (#7357)Konstantin Lebedev2-4/+277
* fix: Use a mix of virtual and path styles within a single subdomain * address comments * add tests --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
2025-10-24Improve admin urls (#7370)Yavor Konstantinov4-125/+158
* Improve Master and Volume URLs in admin dashboard - Add clickable URL for master node. - Refactor Volume server URL to use PublicURL if set. 'address' is used as fallback. * Make volume servers show in consistent order - Sort servers by name to ensure predictable order after each refresh. * address comment --------- Co-authored-by: chrislu <chris.lu@gmail.com>
2025-10-24avoid repeated reading disk (#7369)Chris Lu6-74/+237
* avoid repeated reading disk * checks both flush time AND read position advancement * wait on cond * fix reading Gap detection and skipping to earliest memory time Time-based reads that include events at boundary times for first reads (offset ≤ 0) Aggregated subscriber wake-up via ListenersWaits signaling * address comments
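A minimal sketch of the wait-on-cond idea, assuming the broker guards its in-memory buffer state with a sync.Cond: a caught-up subscriber blocks until both the flush time and the readable position have advanced, instead of re-reading the disk in a busy loop. Field and method names here are illustrative, not SeaweedFS's.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// logBuffer is an illustrative stand-in for the broker's in-memory log buffer.
type logBuffer struct {
	mu            sync.Mutex
	cond          *sync.Cond
	lastFlushTsNs int64 // advanced when data is flushed
	readableTsNs  int64 // advanced when the readable position moves forward
}

func newLogBuffer() *logBuffer {
	b := &logBuffer{}
	b.cond = sync.NewCond(&b.mu)
	return b
}

// waitForProgress blocks until BOTH the flush time and the readable position
// have moved past what the subscriber already consumed.
func (b *logBuffer) waitForProgress(seenFlushTsNs, seenReadTsNs int64) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for b.lastFlushTsNs <= seenFlushTsNs || b.readableTsNs <= seenReadTsNs {
		b.cond.Wait()
	}
}

// advance runs on the write/flush path; Broadcast wakes every waiting subscriber.
func (b *logBuffer) advance(flushTsNs, readTsNs int64) {
	b.mu.Lock()
	b.lastFlushTsNs, b.readableTsNs = flushTsNs, readTsNs
	b.mu.Unlock()
	b.cond.Broadcast()
}

func main() {
	b := newLogBuffer()
	go func() {
		time.Sleep(10 * time.Millisecond)
		b.advance(time.Now().UnixNano(), 1) // new data flushed and position advanced
	}()
	b.waitForProgress(0, 0)
	fmt.Println("subscriber woke up only after real progress")
}
```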
2025-10-24Revert "fix reading"chrislu6-182/+58
This reverts commit 64a4ce93580258b8d5537416dfb5d9a1a8f14ee2.
2025-10-24fix readingchrislu6-58/+182
Gap detection and skipping to earliest memory time Time-based reads that include events at boundary times for first reads (offset ≤ 0) Aggregated subscriber wake-up via ListenersWaits signaling
2025-10-23Fix 'NaN%' issue when running volume.fsck (#7368)Yavor Konstantinov1-1/+6
* Fix 'NaN%' issue when running volume.fsck - Running `volume.fsck` on an empty cluster will display 'NaN%'. * Refactor - Extract count of orphan chunks in summary to a new var. - Restore handling for 'NaN' for individual volumes. It's not necessary because the check is already done. * Make code more idiomatic
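The underlying issue is a 0/0 division when the cluster holds no chunks. A minimal sketch of the guard, with illustrative variable names:

```go
package main

import "fmt"

// percent avoids 0/0 -> NaN in the summary line by returning 0 for an empty total.
func percent(orphanedCount, totalCount uint64) float64 {
	if totalCount == 0 {
		return 0
	}
	return float64(orphanedCount) / float64(totalCount) * 100
}

func main() {
	fmt.Printf("%.2f%% of chunks are orphaned\n", percent(0, 0))   // 0.00% instead of NaN%
	fmt.Printf("%.2f%% of chunks are orphaned\n", percent(5, 200)) // 2.50%
}
```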
2025-10-23S3 API: Fix SSE-S3 decryption on object download (#7366)Chris Lu17-256/+1762
* S3 API: Fix SSE-S3 decryption on object download Fixes #7363 This commit adds missing SSE-S3 decryption support when downloading objects from SSE-S3 encrypted buckets. Previously, SSE-S3 encrypted objects were returned in their encrypted form, causing data corruption and hash mismatches. Changes: - Updated detectPrimarySSEType() to detect SSE-S3 encrypted objects by examining chunk metadata and distinguishing SSE-S3 from SSE-KMS - Added SSE-S3 handling in handleSSEResponse() to route to new handler - Implemented handleSSES3Response() for both single-part and multipart SSE-S3 encrypted objects with proper decryption - Implemented createMultipartSSES3DecryptedReader() for multipart objects with per-chunk decryption using stored IVs - Updated addSSEHeadersToResponse() to include SSE-S3 response headers The fix follows the existing SSE-C and SSE-KMS patterns, using the envelope encryption architecture where each object's DEK is encrypted with the KEK stored in the filer. * Add comprehensive tests for SSE-S3 decryption - TestSSES3EncryptionDecryption: basic encryption/decryption - TestSSES3IsRequestInternal: request detection - TestSSES3MetadataSerialization: metadata serialization/deserialization - TestDetectPrimarySSETypeS3: SSE type detection for various scenarios - TestAddSSES3HeadersToResponse: response header validation - TestSSES3EncryptionWithBaseIV: multipart encryption with base IV - TestSSES3WrongKeyDecryption: wrong key error handling - TestSSES3KeyGeneration: key generation and uniqueness - TestSSES3VariousSizes: encryption/decryption with various data sizes - TestSSES3ResponseHeaders: response header correctness - TestSSES3IsEncryptedInternal: metadata-based encryption detection - TestSSES3InvalidMetadataDeserialization: error handling for invalid metadata - TestGetSSES3Headers: header generation - TestProcessSSES3Request: request processing - TestGetSSES3KeyFromMetadata: key extraction from metadata - TestSSES3EnvelopeEncryption: envelope encryption correctness - TestValidateSSES3Key: key validation All tests pass successfully, providing comprehensive coverage for the SSE-S3 decryption fix. * Address PR review comments 1. Fix resource leak in createMultipartSSES3DecryptedReader: - Wrap decrypted reader with closer to properly release resources - Ensure underlying chunkReader is closed when done 2. Handle mixed-encryption objects correctly: - Check chunk encryption type before attempting decryption - Pass through non-SSE-S3 chunks unmodified - Log encryption type for debugging 3. Improve SSE type detection logic: - Add explicit case for aws:kms algorithm - Handle unknown algorithms gracefully - Better documentation for tie-breaking precedence 4. Document tie-breaking behavior: - Clarify that mixed encryption indicates potential corruption - Explicit precedence order: SSE-C > SSE-KMS > SSE-S3 These changes address high-severity resource management issues and improve robustness when handling edge cases and mixed-encryption scenarios. * Fix IV retrieval for small/inline SSE-S3 encrypted files Critical bug fix: The previous implementation only looked for the IV in chunk metadata, which would fail for small files stored inline (without chunks). 
Changes: - Check object-level metadata (sseS3Key.IV) first for inline files - Fallback to first chunk metadata only if object-level IV not found - Improved error message to indicate both locations were checked This ensures small SSE-S3 encrypted files (stored inline in entry.Content) can be properly decrypted, as their IV is stored in the object-level SeaweedFSSSES3Key metadata rather than in chunk metadata. Fixes the high-severity issue identified in PR review. * Clean up unused SSE metadata helper functions Remove legacy SSE metadata helper functions that were never fully implemented or used: Removed unused functions: - StoreSSECMetadata() / GetSSECMetadata() - StoreSSEKMSMetadata() / GetSSEKMSMetadata() - StoreSSES3Metadata() / GetSSES3Metadata() - IsSSEEncrypted() - GetSSEAlgorithm() Removed unused constants: - MetaSSEAlgorithm - MetaSSECKeyMD5 - MetaSSEKMSKeyID - MetaSSEKMSEncryptedKey - MetaSSEKMSContext - MetaSSES3KeyID These functions were from an earlier design where IV and other metadata would be stored in common entry.Extended keys. The actual implementations use type-specific serialization: - SSE-C: Uses StoreIVInMetadata()/GetIVFromMetadata() directly for IV - SSE-KMS: Serializes entire SSEKMSKey structure as JSON (includes IV) - SSE-S3: Serializes entire SSES3Key structure as JSON (includes IV) This follows Option A: SSE-S3 uses envelope encryption pattern like SSE-KMS, where IV is stored within the serialized key metadata rather than in a separate metadata field. Kept functions still in use: - StoreIVInMetadata() - Used by SSE-C - GetIVFromMetadata() - Used by SSE-C and streaming copy - MetaSSEIV constant - Used by SSE-C All tests pass after cleanup. * Rename SSE metadata functions to clarify SSE-C specific usage Renamed functions and constants to explicitly indicate they are SSE-C specific, improving code clarity: Renamed: - MetaSSEIV → MetaSSECIV - StoreIVInMetadata() → StoreSSECIVInMetadata() - GetIVFromMetadata() → GetSSECIVFromMetadata() Updated all usages across: - s3api_key_rotation.go - s3api_streaming_copy.go - s3api_object_handlers_copy.go - s3_sse_copy_test.go - s3_sse_test_utils_test.go Rationale: These functions are exclusively used by SSE-C for storing/retrieving the IV in entry.Extended metadata. SSE-KMS and SSE-S3 use different approaches (IV stored in serialized key structures), so the generic names were misleading. The new names make it clear these are part of the SSE-C implementation. All tests pass. * Add integration tests for SSE-S3 end-to-end encryption/decryption These integration tests cover the complete encrypt->store->decrypt cycle that was missing from the original test suite. They would have caught the IV retrieval bug for inline files. Tests added: - TestSSES3EndToEndSmallFile: Tests inline files (10, 50, 256 bytes) * Specifically tests the critical IV retrieval path for inline files * This test explicitly checks the bug we fixed where inline files couldn't retrieve their IV from object-level metadata - TestSSES3EndToEndChunkedFile: Tests multipart encrypted files * Verifies per-chunk metadata serialization/deserialization * Tests that each chunk can be independently decrypted with its own IV - TestSSES3EndToEndWithDetectPrimaryType: Tests type detection * Verifies inline vs chunked SSE-S3 detection * Ensures SSE-S3 is distinguished from SSE-KMS Note: Full HTTP handler tests (PUT -> GET through actual handlers) would require a complete mock server with filer connections, which is complex. 
These tests focus on the critical decrypt path and data flow. Why these tests are important: - Unit tests alone don't catch integration issues - The IV retrieval bug existed because there was no end-to-end test - These tests simulate the actual storage/retrieval flow - They verify the complete encryption architecture works correctly All tests pass. * Fix TestValidateSSES3Key expectations to match actual implementation The ValidateSSES3Key function only validates that the key struct is not nil, but doesn't validate the Key field contents or size. The test was expecting validation that doesn't exist. Updated test cases: - Nil key struct → should error (correct) - Valid key → should not error (correct) - Invalid key size → should not error (validation doesn't check this) - Nil key bytes → should not error (validation doesn't check this) Added comments to clarify what the current validation actually checks. This matches the behavior of ValidateSSEKMSKey and ValidateSSECKey which also only check for nil struct, not field contents. All SSE tests now pass. * Improve ValidateSSES3Key to properly validate key contents Enhanced the validation function from only checking nil struct to comprehensive validation of all key fields: Validations added: 1. Key bytes not nil 2. Key size exactly 32 bytes (SSES3KeySize) 3. Algorithm must be "AES256" (SSES3Algorithm) 4. Key ID must not be empty 5. IV length must be 16 bytes if set (optional - set during encryption) Test improvements (10 test cases): - Nil key struct - Valid key without IV - Valid key with IV - Invalid key size (too small) - Invalid key size (too large) - Nil key bytes - Empty key ID - Invalid algorithm - Invalid IV length - Empty IV (allowed - set during encryption) This matches the robustness of SSE-C and SSE-KMS validation and will catch configuration errors early rather than failing during encryption/decryption. All SSE tests pass. * Replace custom string helper functions with strings.Contains Address Gemini Code Assist review feedback: - Remove custom contains() and findSubstring() helper functions - Use standard library strings.Contains() instead - Add strings import This makes the code more idiomatic and easier to maintain by using the standard library instead of reimplementing functionality. Changes: - Added "strings" to imports - Replaced contains(err.Error(), tc.errorMsg) with strings.Contains(err.Error(), tc.errorMsg) - Removed 15 lines of custom helper code All tests pass. * filer fix reading and writing SSE-S3 headers * filter out seaweedfs internal headers * Update weed/s3api/s3api_object_handlers.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/s3api/s3_validation_utils.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update s3api_streaming_copy.go * remove fallback * remove redundant check * refactor * remove extra object fetching * in case object is not found * Correct Version Entry for SSE Routing * Proper Error Handling for SSE Entry Fetching * Eliminated All Redundant Lookups * Removed brittle “exactly 5 successes/failures” assertions. Added invariant checks total recorded attempts equals request count, successes never exceed capacity, failures cover remaining attempts, final AvailableSpace matches capacity - successes. * refactor * fix test * Fixed Broken Fallback Logic * refactor * Better Error for Encryption Type Mismatch * refactor --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
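A hedged sketch of the envelope-encryption flow this fix decrypts on download: each object's DEK is stored wrapped by a KEK, and the IV lives in the serialized per-object SSE-S3 key metadata. AES-GCM for key wrapping and AES-CTR for the data stream are assumptions about cipher choice, and all names are illustrative rather than SeaweedFS's exact functions.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// wrapDEK encrypts a per-object data key with the KEK, returning nonce||ciphertext.
func wrapDEK(kek, dek []byte) ([]byte, error) {
	block, err := aes.NewCipher(kek)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, dek, nil), nil
}

// unwrapDEK reverses wrapDEK on the download path.
func unwrapDEK(kek, wrapped []byte) ([]byte, error) {
	block, err := aes.NewCipher(kek)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := wrapped[:gcm.NonceSize()], wrapped[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	kek := make([]byte, 32) // key-encryption key (the commit persists this in the filer)
	dek := make([]byte, 32) // per-object data-encryption key
	iv := make([]byte, aes.BlockSize)
	rand.Read(kek)
	rand.Read(dek)
	rand.Read(iv)

	// Upload path: encrypt object bytes with the DEK, store wrapped DEK + IV as metadata.
	plaintext := []byte("object bytes")
	ciphertext := make([]byte, len(plaintext))
	blk, _ := aes.NewCipher(dek)
	cipher.NewCTR(blk, iv).XORKeyStream(ciphertext, plaintext)
	wrapped, _ := wrapDEK(kek, dek)

	// Download path (what this fix adds): unwrap the DEK, then decrypt with DEK + stored IV.
	recoveredDEK, _ := unwrapDEK(kek, wrapped)
	blk2, _ := aes.NewCipher(recoveredDEK)
	recovered := make([]byte, len(ciphertext))
	cipher.NewCTR(blk2, iv).XORKeyStream(recovered, ciphertext)
	fmt.Println(string(recovered)) // "object bytes"
}
```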
2025-10-23Improve-worker (#7367)Mariano Ntrougkas4-36/+12
* ♻️ refactor(worker): remove goto * ♻️ refactor(worker): let manager loop exit by itself * ♻️ refactor(worker): fix race condition when closing worker CloseSend is not safe to call when another goroutine concurrently calls Send. streamCancel already handles proper stream closure. Also, streamExit signal should be called AFTER sending shutdownMsg Now the worker has no race condition if stopped during any moment (hopefully, tested with -race flag) * 🐛 fix(task_logger): deadlock in log closure * 🐛 fix(balance): fix balance task Removes the outdated "UnloadVolume" step as it is handled by "DeleteVolume". #7346
2025-10-23fixing auto complete (#7365)Chris Lu1-5/+69
* fixing auto complete * Update weed/command/autocomplete.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/command/autocomplete.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-22Update volume_growth_reservation_test.gochrislu1-0/+5
2025-10-22Update volume_growth_reservation_test.gochrislu1-6/+8
2025-10-22♻️ refactor(worker): decouple state management using command-query pattern (#7354)Mariano Ntrougkas2-578/+753
* ♻️ refactor(worker): decouple state management using command-query pattern This commit eliminates all uses of sync.Mutex across the `worker.go` and `client.go` components, changing how mutable state is accessed and modified. Single Owner Principle is now enforced. - Guarantees thread safety and prevents data races by ensuring that only one goroutine ever modifies or reads state. Impact: Improves application concurrency, reliability, and maintainability by isolating state concerns. * 🐛 fix(worker): fix race condition when closing The use of select/default is wrong for mandatory shutdown signals. * 🐛 fix(worker): do not get tickers in every iteration * 🐛 fix(worker): fix race condition when closing pt 2 refactor `handleOutgoing` to mirror the non-blocking logic of `handleIncoming` * address comments * To ensure stream errors are always processed, the send should be blocking. * avoid blocking the manager loop while waiting for tasks to complete --------- Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: Chris Lu <chris.lu@gmail.com>
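A minimal sketch of the command-query / single-owner pattern described above: all mutable worker state lives inside one goroutine's loop, and other goroutines never touch it directly; they send commands or queries over channels and wait for a reply. The types and fields are illustrative stand-ins, not the actual worker/client code.

```go
package main

import "fmt"

type setTask struct{ id string }
type getTaskCount struct{ reply chan int }

// runStateOwner is the single goroutine allowed to read or modify the state,
// so no mutex is needed and data races are impossible by construction.
func runStateOwner(commands <-chan interface{}, done <-chan struct{}) {
	activeTasks := map[string]bool{}
	for {
		select {
		case cmd := <-commands:
			switch c := cmd.(type) {
			case setTask:
				activeTasks[c.id] = true
			case getTaskCount:
				c.reply <- len(activeTasks) // query answered from the owning goroutine
			}
		case <-done:
			return
		}
	}
}

func main() {
	commands := make(chan interface{})
	done := make(chan struct{})
	go runStateOwner(commands, done)

	commands <- setTask{id: "ec-encode-42"}
	reply := make(chan int)
	commands <- getTaskCount{reply: reply}
	fmt.Println("active tasks:", <-reply) // 1
	close(done)
}
```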
2025-10-22S3: Avoid in-memory map concurrent writes in SSE-S3 key manager (#7358)Chris Lu4-40/+247
* Fix concurrent map writes in SSE-S3 key manager This commit fixes issue #7352 where parallel uploads to SSE-S3 enabled buckets were causing 'fatal error: concurrent map writes' crashes. The SSES3KeyManager struct had an unsynchronized map that was being accessed from multiple goroutines during concurrent PUT operations. Changes: - Added sync.RWMutex to SSES3KeyManager struct - Protected StoreKey() with write lock - Protected GetKey() with read lock - Updated GetOrCreateKey() with proper read/write locking pattern including double-check to prevent race conditions All existing SSE tests pass successfully. Fixes #7352 * Improve SSE-S3 key manager with envelope encryption Replace in-memory key storage with envelope encryption using a super key (KEK). Instead of storing DEKs in a map, the key manager now: - Uses a randomly generated 256-bit super key (KEK) - Encrypts each DEK with the super key using AES-GCM - Stores the encrypted DEK in object metadata - Decrypts the DEK on-demand when reading objects Benefits: - Eliminates unbounded memory growth from caching DEKs - Provides better security with authenticated encryption (AES-GCM) - Follows envelope encryption best practices (similar to AWS KMS) - No need for mutex-protected map lookups on reads - Each object's encrypted DEK is self-contained in its metadata This approach matches the design pattern used in the local KMS provider and is more suitable for production use. * Persist SSE-S3 KEK in filer for multi-server support Store the SSE-S3 super key (KEK) in the filer at /.seaweedfs/s3/kek instead of generating it per-server. This ensures: 1. **Multi-server consistency**: All S3 API servers use the same KEK 2. **Persistence across restarts**: KEK survives server restarts 3. **Centralized management**: KEK stored in filer, accessible to all servers 4. **Automatic initialization**: KEK is created on first startup if it doesn't exist The KEK is: - Stored as hex-encoded bytes in filer - Protected with file mode 0600 (read/write for owner only) - Located in /.seaweedfs/s3/ directory (mode 0700) - Loaded on S3 API server startup - Reused across all S3 API server instances This matches the architecture of centralized configuration in SeaweedFS and enables proper SSE-S3 support in multi-server deployments. * Change KEK storage location to /etc/s3/kek Move SSE-S3 KEK from /.seaweedfs/s3/kek to /etc/s3/kek for better organization and consistency with other SeaweedFS configuration files. The /etc directory is the standard location for configuration files in SeaweedFS. * use global sse-se key manager when copying * Update volume_growth_reservation_test.go * Rename KEK file to sse_kek for clarity Changed /etc/s3/kek to /etc/s3/sse_kek to make it clear this key is specifically for SSE-S3 encryption, not for other KMS purposes. This improves clarity and avoids potential confusion with the separate KMS provider system used for SSE-KMS. * Use constants for SSE-S3 KEK directory and file name Refactored to use named constants instead of string literals: - SSES3KEKDirectory = "/etc/s3" - SSES3KEKParentDir = "/etc" - SSES3KEKDirName = "s3" - SSES3KEKFileName = "sse_kek" This improves maintainability and makes it easier to change the storage location if needed in the future. * Address PR review: Improve error handling and robustness Addresses review comments from https://github.com/seaweedfs/seaweedfs/pull/7358#pullrequestreview-3367476264 Critical fixes: 1. 
Distinguish between 'not found' and other errors when loading KEK - Only generate new KEK if ErrNotFound - Fail fast on connectivity/permission errors to prevent data loss - Prevents creating new KEK that would make existing data undecryptable 2. Make SSE-S3 initialization failure fatal - Return error instead of warning when initialization fails - Prevents server from running in broken state 3. Improve directory creation error handling - Only ignore 'file exists' errors - Fail on permission/connectivity errors These changes ensure the SSE-S3 key manager is robust against transient errors and prevents accidental data loss. * Fix KEK path conflict with /etc/s3 file Changed KEK storage from /etc/s3/sse_kek to /etc/seaweedfs/s3_sse_kek to avoid conflict with the circuit breaker config at /etc/s3. The /etc/s3 path is used by CircuitBreakerConfigDir and may exist as a file (circuit_breaker.json), causing the error: 'CreateEntry /etc/s3/sse_kek: /etc/s3 should be a directory' New KEK location: /etc/seaweedfs/s3_sse_kek This uses the seaweedfs subdirectory which is more appropriate for internal SeaweedFS configuration files. Fixes startup failure when /etc/s3 exists as a file. * Revert KEK path back to /etc/s3/sse_kek Changed back from /etc/seaweedfs/s3_sse_kek to /etc/s3/sse_kek as requested. The /etc/s3 directory will be created properly when it doesn't exist. * Fix directory creation with proper ModeDir flag Set FileMode to uint32(0755 | os.ModeDir) when creating /etc/s3 directory to ensure it's created as a directory, not a file. Without the os.ModeDir flag, the entry was being created as a file, which caused the error 'CreateEntry: /etc/s3 is a file' when trying to create the KEK file inside it. Uses 0755 permissions (rwxr-xr-x) for the directory and adds os import for os.ModeDir constant.
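A minimal sketch of the initial concurrency fix described in this commit: the key manager's map gains a sync.RWMutex, and GetOrCreateKey uses a read-lock fast path plus a double-check under the write lock so two parallel uploads cannot both create a key or trigger concurrent map writes. Struct and field names are simplified stand-ins (the later envelope-encryption redesign replaces the map entirely).

```go
package main

import (
	"fmt"
	"sync"
)

type SSES3Key struct{ ID string }

type keyManager struct {
	mu   sync.RWMutex
	keys map[string]*SSES3Key
}

func (m *keyManager) GetOrCreateKey(bucket string) *SSES3Key {
	// Fast path: shared read lock.
	m.mu.RLock()
	if k, ok := m.keys[bucket]; ok {
		m.mu.RUnlock()
		return k
	}
	m.mu.RUnlock()

	// Slow path: take the write lock, then re-check in case another goroutine
	// created the key between our RUnlock and Lock.
	m.mu.Lock()
	defer m.mu.Unlock()
	if k, ok := m.keys[bucket]; ok {
		return k
	}
	k := &SSES3Key{ID: "key-for-" + bucket}
	m.keys[bucket] = k
	return k
}

func main() {
	m := &keyManager{keys: map[string]*SSES3Key{}}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // parallel uploads no longer crash with concurrent map writes
		wg.Add(1)
		go func() {
			defer wg.Done()
			m.GetOrCreateKey("my-bucket")
		}()
	}
	wg.Wait()
	fmt.Println(len(m.keys)) // 1
}
```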
2025-10-21[weed] update volume.fix.replication description (#7340)nightcoffee3-4/+4
* [weed] update volume.fix.replication description * Update master-cloud.toml * Update master.toml
2025-10-21go mod tidyChris Lu1-2/+2
2025-10-21Merge branch 'master' of https://github.com/seaweedfs/seaweedfsChris Lu2-2/+5
2025-10-21Switch to seaweedfs/cockroachdb-parser directlyChris Lu3-7/+5
Removed the replace directive and updated all references to use github.com/seaweedfs/cockroachdb-parser directly instead of redirecting from github.com/cockroachdb/cockroachdb-parser. Changes: - Updated import in weed/query/engine/cockroach_parser.go - Removed replace directive from go.mod - Now using seaweedfs/cockroachdb-parser@909763b17138 which includes all 32-bit architecture fixes and uses seaweedfs namespace throughout Build verified successfully for GOOS=openbsd GOARCH=arm.
2025-10-21Update cockroachdb-parser to fix 32-bit buildsChris Lu2-5/+5
Use seaweedfs/cockroachdb-parser v0.0.0-20251021182748-d0c58c67297e via replace directive to fix building on 32-bit architectures like OpenBSD ARM. The replace directive ensures all imports of github.com/cockroachdb/cockroachdb-parser use the seaweedfs fork which includes fixes for: - Integer overflow in tsearch evaluation (int -> int64) - Flag type overflow in lexbase and tree packages (int -> int64)
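For reference, the go.mod change this commit describes amounts to a single replace directive (the pseudo-version is the one quoted in the commit message); the "Switch to seaweedfs/cockroachdb-parser directly" commit above later removes it in favor of importing the fork directly.

```
replace github.com/cockroachdb/cockroachdb-parser => github.com/seaweedfs/cockroachdb-parser v0.0.0-20251021182748-d0c58c67297e
```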
2025-10-21Update cockroachdb-parser to fix 32-bit builds3.98Chris Lu2-4/+5
Updated to seaweedfs/cockroachdb-parser v0.0.0-20251021182748-d0c58c67297e which includes fixes for building on 32-bit architectures like OpenBSD ARM. Fixes: - Integer overflow in tsearch evaluation (int -> int64) - Flag type overflow in lexbase and tree packages (int -> int64)
2025-10-21update cockroachdb-parser due to build issuesChris Lu2-1/+3
2025-10-203.98Chris Lu2-3/+3