path: root/weed/s3api/s3api_object_handlers_test.go
Age | Commit message | Author | Files | Lines
2025-11-26 | Support multiple filers for S3 and IAM servers with automatic failover (#7550) | Chris Lu | 1 | -1/+1
* Support multiple filers for S3 and IAM servers with automatic failover

This change adds support for multiple filer addresses in the 'weed s3' and 'weed iam' commands, enabling high availability through automatic failover.

Key changes:
- Updated S3ApiServerOption.Filer to Filers ([]pb.ServerAddress)
- Updated IamServerOption.Filer to Filers ([]pb.ServerAddress)
- Modified -filer flag to accept comma-separated addresses
- Added getFilerAddress() helper methods for backward compatibility
- Updated all filer client calls to support multiple addresses
- Uses pb.WithOneOfGrpcFilerClients for automatic failover

Usage:
  weed s3 -filer=localhost:8888,localhost:8889
  weed iam -filer=localhost:8888,localhost:8889

The underlying FilerClient already supported multiple filers with health tracking and automatic failover - this change exposes that capability through the command-line interface.

* Add filer discovery: treat initial filers as seeds and discover peers from master

Enhances FilerClient to automatically discover additional filers in the same filer group by querying the master server. This allows users to specify just a few seed filers, and the client will discover all other filers in the cluster.

Key changes to wdclient/FilerClient:
- Added MasterClient, FilerGroup, and DiscoveryInterval fields
- Added thread-safe filer list management with RWMutex
- Implemented discoverFilers() background goroutine
- Uses cluster.ListExistingPeerUpdates() to query master for filers
- Automatically adds newly discovered filers to the list
- Added Close() method to clean up discovery goroutine

New FilerClientOption fields:
- MasterClient: enables filer discovery from master
- FilerGroup: specifies which filer group to discover
- DiscoveryInterval: how often to refresh (default 5 minutes)

Usage example:
  masterClient := wdclient.NewMasterClient(...)
  filerClient := wdclient.NewFilerClient(
      []pb.ServerAddress{"localhost:8888"}, // seed filers
      grpcDialOption,
      dataCenter,
      &wdclient.FilerClientOption{
          MasterClient: masterClient,
          FilerGroup:   "my-group",
      },
  )
  defer filerClient.Close()

The initial filers act as seeds - the client discovers and adds all other filers in the same group from the master. Discovered filers are added dynamically without removing existing ones (relying on health checks for unavailable filers).

* Address PR review comments: implement full failover for IAM operations

Critical fixes based on code review feedback:

1. **IAM API Failover (Critical)**:
   - Replace pb.WithGrpcFilerClient with pb.WithOneOfGrpcFilerClients in:
     * GetS3ApiConfigurationFromFiler()
     * PutS3ApiConfigurationToFiler()
     * GetPolicies()
     * PutPolicies()
   - Now all IAM operations support automatic failover across multiple filers

2. **Validation Improvements**:
   - Add validation in NewIamApiServerWithStore() to require at least one filer
   - Add validation in NewS3ApiServerWithStore() to require at least one filer
   - Add warning log when no filers configured for credential store

3. **Error Logging**:
   - Circuit breaker now logs when config load fails instead of silently ignoring
   - Helps operators understand why circuit breaker limits aren't applied

4. **Code Quality**:
   - Use ToGrpcAddress() for filer address in credential store setup
   - More consistent with rest of codebase and future-proof

These changes ensure IAM operations have the same high availability guarantees as S3 operations, completing the multi-filer failover implementation.
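A minimal sketch of the failover pattern described above, with hypothetical names (tryEachFiler stands in for what pb.WithOneOfGrpcFilerClients provides; the real helper issues gRPC calls rather than invoking a plain closure):

```go
// Illustrative sketch only, not the actual pb.WithOneOfGrpcFilerClients code:
// run the same operation against each configured filer until one succeeds.
package main

import (
	"errors"
	"fmt"
)

// filerOp stands in for a closure that talks to one filer over gRPC.
type filerOp func(filerAddress string) error

// tryEachFiler is a hypothetical helper showing the failover pattern the
// commit describes: attempt every address and return an error only if all fail.
func tryEachFiler(filers []string, op filerOp) error {
	var lastErr error
	for _, addr := range filers {
		if err := op(addr); err != nil {
			lastErr = err
			continue // this filer failed, try the next one
		}
		return nil // first successful filer wins
	}
	if lastErr == nil {
		lastErr = errors.New("no filers configured")
	}
	return lastErr
}

func main() {
	filers := []string{"localhost:8888", "localhost:8889"}
	err := tryEachFiler(filers, func(addr string) error {
		fmt.Println("calling filer", addr)
		return nil // a real op would issue a gRPC request here
	})
	fmt.Println("result:", err)
}
```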
* Fix IAM manager initialization: remove code duplication, add TODO for HA

Addresses review comment on s3api_server.go:145

Changes:
- Remove duplicate code for getting first filer address
- Extract filerAddr variable once and reuse
- Add TODO comment documenting the HA limitation for IAM manager
- Document that loadIAMManagerFromConfig and NewS3IAMIntegration need updates to support multiple filers for full HA

Note: This is a known limitation when using filer-backed IAM stores. The interfaces need to be updated to accept multiple filer addresses. For now, documenting this limitation clearly.

* Document credential store HA limitation with TODO

Addresses review comment on auth_credentials.go:149

Changes:
- Add TODO comment documenting that SetFilerClient interface needs update for multi-filer support
- Add informative log message indicating HA limitation
- Document that this is a known limitation for filer-backed credential stores

The SetFilerClient interface currently only accepts a single filer address. To properly support HA, the credential store interfaces need to be updated to handle multiple filer addresses.

* Track current active filer in FilerClient for better HA

Add GetCurrentFiler() method to FilerClient that returns the currently active filer based on the filerIndex which is updated on successful operations. This provides better availability than always using the first filer.

Changes:
- Add FilerClient.GetCurrentFiler() method that returns current active filer
- Update S3ApiServer.getFilerAddress() to use FilerClient's current filer
- Add fallback to first filer if FilerClient not yet initialized
- Document IAM limitation (doesn't have FilerClient access)

Benefits:
- Single-filer operations (URLs, ReadFilerConf, etc.) now use the currently active/healthy filer
- Better distribution and failover behavior
- FilerClient's round-robin and health tracking automatically determines which filer to use

* Document ReadFilerConf HA limitation in lifecycle handlers

Addresses review comment on s3api_bucket_handlers.go:880

Add comment documenting that ReadFilerConf uses the current active filer from FilerClient (which is better than always using first filer), but doesn't have built-in multi-filer failover. Add TODO to update filer.ReadFilerConf to support multiple filers for complete HA. For now, it uses the currently active/healthy filer tracked by FilerClient which provides reasonable availability.

* Document multipart upload URL HA limitation

Addresses review comment on s3api_object_handlers_multipart.go:442

Add comment documenting that part upload URLs point to the current active filer (tracked by FilerClient), which is better than always using the first filer but still creates a potential point of failure if that filer becomes unavailable during upload.

Suggest TODO solutions:
- Use virtual hostname/load balancer for filers
- Have S3 server proxy uploads to healthy filers

Current behavior provides reasonable availability by using the currently active/healthy filer rather than being pinned to first filer.

* Document multipart completion Location URL limitation

Addresses review comment on filer_multipart.go:187

Add comment documenting that the Location URL in CompleteMultipartUpload response points to the current active filer (tracked by FilerClient). Note that clients should ideally use the S3 API endpoint rather than this direct URL. If direct access is attempted and the specific filer is unavailable, the request will fail.

Current behavior uses the currently active/healthy filer rather than being pinned to the first filer, providing better availability.

* Make credential store use current active filer for HA

Update FilerEtcStore to use a function that returns the current active filer instead of a fixed address, enabling high availability.

Changes:
- Add SetFilerAddressFunc() method to FilerEtcStore
- Store uses filerAddressFunc instead of fixed filerGrpcAddress
- withFilerClient() calls the function to get current active filer
- Keep SetFilerClient() for backward compatibility (marked deprecated)
- Update S3ApiServer to pass FilerClient.GetCurrentFiler to store

Benefits:
- Credential store now uses currently active/healthy filer
- Automatic failover when filer becomes unavailable
- True HA for credential operations
- Backward compatible with old SetFilerClient interface

This addresses the credential store limitation - no longer pinned to first filer, uses FilerClient's tracked current active filer.

* Clarify multipart URL comments: filer address not used for uploads

Update comments to reflect that multipart upload URLs are not actually used for upload traffic - uploads go directly to volume servers.

Key clarifications:
- genPartUploadUrl: Filer address is parsed out, only path is used
- CompleteMultipartUpload Location: Informational field per AWS S3 spec
- Actual uploads bypass filer proxy and go directly to volume servers

The filer address in these URLs is NOT a HA concern because:
1. Part uploads: URL is parsed for path, upload goes to volume servers
2. Location URL: Informational only, clients use S3 endpoint

This addresses the observation that S3 uploads don't go through filers, only metadata operations do.

* Remove filer address from upload paths - pass path directly

Eliminate unnecessary filer address from upload URLs by passing file paths directly instead of full URLs that get immediately parsed.

Changes:
- Rename genPartUploadUrl() → genPartUploadPath() (returns path only)
- Rename toFilerUrl() → toFilerPath() (returns path only)
- Update putToFiler() to accept filePath instead of uploadUrl
- Remove URL parsing code (no longer needed)
- Remove net/url import (no longer used)
- Keep old function names as deprecated wrappers for compatibility

Benefits:
- Cleaner code - no fake URL construction/parsing
- No dependency on filer address for internal operations
- More accurate naming (these are paths, not URLs)
- Eliminates confusion about HA concerns

This completely removes the filer address from upload operations - it was never actually used for routing, only parsed for the path.

* Remove deprecated functions: use new path-based functions directly

Remove deprecated wrapper functions and update all callers to use the new function names directly.

Removed:
- genPartUploadUrl() → all callers now use genPartUploadPath()
- toFilerUrl() → all callers now use toFilerPath()
- SetFilerClient() → removed along with fallback code

Updated:
- s3api_object_handlers_multipart.go: uploadUrl → filePath
- s3api_object_handlers_put.go: uploadUrl → filePath, versionUploadUrl → versionFilePath
- s3api_object_versioning.go: toFilerUrl → toFilerPath
- s3api_object_handlers_test.go: toFilerUrl → toFilerPath
- auth_credentials.go: removed SetFilerClient fallback
- filer_etc_store.go: removed deprecated SetFilerClient method

Benefits:
- Cleaner codebase with no deprecated functions
- All variable names accurately reflect that they're paths, not URLs
- Single interface for credential stores (SetFilerAddressFunc only)

All code now consistently uses the new path-based approach.

* Fix toFilerPath: remove URL escaping for raw file paths

The toFilerPath function should return raw file paths, not URL-escaped paths. URL escaping was needed when the path was embedded in a URL (old toFilerUrl), but now that we pass paths directly to putToFiler, they should be unescaped.

This fixes S3 integration test failures:
- test_bucket_listv2_encoding_basic
- test_bucket_list_encoding_basic
- test_bucket_listv2_delimiter_whitespace
- test_bucket_list_delimiter_whitespace

The tests were failing because paths were double-encoded (escaped when stored, then escaped again when listed), resulting in %252B instead of %2B for '+' characters.

Root cause: When we removed URL parsing in putToFiler, we should have also removed URL escaping in toFilerPath since paths are now used directly without URL encoding/decoding.

* Add thread safety to FilerEtcStore and clarify credential store comments

Address review suggestions for better thread safety and code clarity:

1. **Thread Safety**: Add RWMutex to FilerEtcStore
   - Protects filerAddressFunc and grpcDialOption from concurrent access
   - Initialize() uses write lock when setting function
   - SetFilerAddressFunc() uses write lock
   - withFilerClient() uses read lock to get function and dial option
   - GetPolicies() uses read lock to check if configured

2. **Improved Error Messages**:
   - Prefix errors with "filer_etc:" for easier debugging
   - "filer address not configured" → "filer_etc: filer address function not configured"
   - "filer address is empty" → "filer_etc: filer address is empty"

3. **Clarified Comments**:
   - auth_credentials.go: Clarify that initial setup is temporary
   - Document that it's updated in s3api_server.go after FilerClient creation
   - Remove ambiguity about when FilerClient.GetCurrentFiler is used

Benefits:
- Safe for concurrent credential operations
- Clear error messages for debugging
- Explicit documentation of initialization order

* Enable filer discovery: pass master addresses to FilerClient

Fix two critical issues:

1. **Filer Discovery Not Working**: Master client was not being passed to FilerClient, so peer discovery couldn't work
2. **Credential Store Design**: Already uses FilerClient via GetCurrentFiler function - this is the correct design for HA

Changes:

**Command (s3.go):**
- Read master addresses from GetFilerConfiguration response
- Pass masterAddresses to S3ApiServerOption
- Log master addresses for visibility

**S3ApiServerOption:**
- Add Masters []pb.ServerAddress field for discovery

**S3ApiServer:**
- Create MasterClient from Masters when available
- Pass MasterClient + FilerGroup to FilerClient via options
- Enable discovery with 5-minute refresh interval
- Log whether discovery is enabled or disabled

**Credential Store:**
- Already correctly uses filerClient.GetCurrentFiler via function
- This provides HA without tight coupling to FilerClient struct
- Function-based design is clean and thread-safe

Discovery Flow:
1. S3 command reads filer config → gets masters + filer group
2. S3ApiServer creates MasterClient from masters
3. FilerClient uses MasterClient to query for peer filers
4. Background goroutine refreshes peer list every 5 minutes
5. Credential store uses GetCurrentFiler to get active filer

Now filer discovery actually works!

* Use S3 endpoint in multipart Location instead of filer address

* Add multi-filer failover to ReadFilerConf

* Address CodeRabbit review: fix buffer reuse and improve lock safety

Address two code review suggestions:

1. **Fix buffer reuse in ReadFilerConfFromFilers**:
   - Use local []byte data instead of shared buffer
   - Prevents partial data from failed attempts affecting successful reads
   - Creates fresh buffer inside callback for masterClient path
   - More robust to future changes in read helpers

2. **Improve lock safety in FilerClient**:
   - Add *WithHealth variants that accept health pointer
   - Get health pointer while holding lock, then release before calling
   - Eliminates potential for lock confusion (though no actual deadlock existed)
   - Clearer separation: lock for data access, atomics for health ops

Changes:
- ReadFilerConfFromFilers: var data []byte, create buf inside callback
- shouldSkipUnhealthyFilerWithHealth(health *filerHealth)
- recordFilerSuccessWithHealth(health *filerHealth)
- recordFilerFailureWithHealth(health *filerHealth)
- Keep old functions for backward compatibility (marked deprecated)
- Update LookupVolumeIds to use WithHealth variants

Benefits:
- More robust multi-filer configuration reading
- Clearer lock vs atomic operation boundaries
- No lock held during health checks (even though atomics don't block)
- Better code organization and maintainability

* add constant

* Fix IAM manager and post policy to use current active filer

* Fix critical race condition and goroutine leak

* Update weed/s3api/filer_multipart.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Fix compilation error and address code review suggestions

Address remaining unresolved comments:

1. **Fix compilation error**: Add missing net/url import
   - filer_multipart.go used url.PathEscape without import
   - Added "net/url" to imports

2. **Fix Location URL formatting** (all 4 occurrences):
   - Add missing slash between bucket and key
   - Use url.PathEscape for bucket names
   - Use urlPathEscape for object keys
   - Handles special characters in bucket/key names
   - Before: http://host/bucketkey
   - After: http://host/bucket/key (properly escaped)

3. **Optimize discovery loop** (O(N*M) → O(N+M)):
   - Use map for existing filers (O(1) lookup)
   - Reduces time holding write lock
   - Better performance with many filers
   - Before: Nested loop for each discovered filer
   - After: Build map once, then O(1) lookups

Changes:
- filer_multipart.go: Import net/url, fix all Location URLs
- filer_client.go: Use map for efficient filer discovery

Benefits:
- Compiles successfully
- Proper URL encoding (handles spaces, special chars)
- Faster discovery with less lock contention
- Production-ready URL formatting

* Fix race conditions and make Close() idempotent

Address CodeRabbit review #3512078995:

1. **Critical: Fix unsynchronized read in error message**
   - Line 584 read len(fc.filerAddresses) without lock
   - Race with refreshFilerList appending to slice
   - Fixed: Take RLock to read length safely
   - Prevents race detector warnings

2. **Important: Make Close() idempotent**
   - Closing already-closed channel panics
   - Can happen with layered cleanup in shutdown paths
   - Fixed: Use sync.Once to ensure single close
   - Safe to call Close() multiple times now

3. **Nitpick: Add warning for empty filer address**
   - getFilerAddress() can return empty string
   - Helps diagnose unexpected state
   - Added: Warning log when no filers available

4. **Nitpick: Guard deprecated index-based helpers**
   - shouldSkipUnhealthyFiler, recordFilerSuccess/Failure
   - Accessed filerHealth without lock (races with discovery)
   - Fixed: Take RLock and check bounds before array access
   - Prevents index out of bounds and races

Changes:
- filer_client.go:
  - Add closeDiscoveryOnce sync.Once field
  - Use Do() in Close() for idempotent channel close
  - Add RLock guards to deprecated index-based helpers
  - Add bounds checking to prevent panics
  - Synchronized read of filerAddresses length in error
- s3api_server.go:
  - Add warning log when getFilerAddress returns empty

Benefits:
- No race conditions (passes race detector)
- No panic on double-close
- Better error diagnostics
- Safe with discovery enabled
- Production-hardened shutdown logic

* Fix hardcoded http scheme and add panic recovery

Address CodeRabbit review #3512114811:

1. **Major: Fix hardcoded http:// scheme in Location URLs**
   - Location URLs always used http:// regardless of client connection
   - HTTPS clients got http:// URLs (incorrect)
   - Fixed: Detect scheme from request
   - Check X-Forwarded-Proto header (for proxies) first
   - Check r.TLS != nil for direct HTTPS
   - Fallback to http for plain connections
   - Applied to all 4 CompleteMultipartUploadResult locations

2. **Major: Add panic recovery to discovery goroutine**
   - Long-running background goroutine could crash entire process
   - Panic in refreshFilerList would terminate program
   - Fixed: Add defer recover() with error logging
   - Goroutine failures now logged, not fatal

3. **Note: Close() idempotency already implemented**
   - Review flagged as duplicate issue
   - Already fixed in commit 3d7a65c7e
   - sync.Once (closeDiscoveryOnce) prevents double-close panic
   - Safe to call Close() multiple times

Changes:
- filer_multipart.go:
  - Add getRequestScheme() helper function
  - Update all 4 Location URLs to use dynamic scheme
  - Format: scheme://host/bucket/key (was: http://...)
- filer_client.go:
  - Add panic recovery to discoverFilers()
  - Log panics instead of crashing

Benefits:
- Correct scheme (https/http) in Location URLs
- Works behind proxies (X-Forwarded-Proto)
- No process crashes from discovery failures
- Production-hardened background goroutine
- Proper AWS S3 API compliance

* Fix S3 WithFilerClient to use filer failover

Critical fix for multi-filer deployments:

**Problem:**
- S3ApiServer.WithFilerClient() was creating direct connections to ONE filer
- Used pb.WithGrpcClient() with single filer address
- No failover - if that filer failed, ALL operations failed
- Caused test failures: "bucket directory not found"
- IAM Integration Tests failing with 500 Internal Error

**Root Cause:**
- WithFilerClient bypassed filerClient connection management
- Always connected to getFilerAddress() (current filer only)
- Didn't retry other filers on failure
- All getEntry(), updateEntry(), etc. operations failed if current filer down

**Solution:**
1. Added FilerClient.GetAllFilers() method
   - Returns snapshot of all filer addresses
   - Thread-safe copy to avoid races
2. Implemented withFilerClientFailover()
   - Try current filer first (fast path)
   - On failure, try all other filers
   - Log successful failover
   - Return error only if ALL filers fail
3. Updated WithFilerClient()
   - Use filerClient for failover when available
   - Fallback to direct connection for testing/init

**Impact:**
✅ All S3 operations now support multi-filer failover
✅ Bucket metadata reads work with any available filer
✅ Entry operations (getEntry, updateEntry) failover automatically
✅ IAM tests should pass now
✅ Production-ready HA support

**Files Changed:**
- wdclient/filer_client.go: Add GetAllFilers() method
- s3api/s3api_handlers.go: Implement failover logic

This fixes the test failure where bucket operations failed when the primary filer was temporarily unavailable during cleanup.

* Update current filer after successful failover

Address code review: https://github.com/seaweedfs/seaweedfs/pull/7550#pullrequestreview-3512223723

**Issue:**
After successful failover, the current filer index was not updated. This meant every subsequent request would still try the (potentially unhealthy) original filer first, then failover again.

**Solution:**
1. Added FilerClient.SetCurrentFiler(addr) method:
   - Finds the index of specified filer address
   - Atomically updates filerIndex to point to it
   - Thread-safe with RLock
2. Call SetCurrentFiler after successful failover:
   - Update happens immediately after successful connection
   - Future requests start with the known-healthy filer
   - Reduces unnecessary failover attempts

**Benefits:**
✅ Subsequent requests use healthy filer directly
✅ No repeated failover to same unhealthy filer
✅ Better performance - fast path hits healthy filer
✅ Comment now matches actual behavior

* Integrate health tracking with S3 failover

Address code review suggestion to leverage existing health tracking instead of simple iteration through all filers.

**Changes:**
1. Added address-based health tracking API to FilerClient:
   - ShouldSkipUnhealthyFiler(addr) - check circuit breaker
   - RecordFilerSuccess(addr) - reset failure count
   - RecordFilerFailure(addr) - increment failure count
   These methods find the filer by address and delegate to existing *WithHealth methods for actual health management.
2. Updated withFilerClientFailover to use health tracking:
   - Record success/failure for every filer attempt
   - Skip unhealthy filers during failover (circuit breaker)
   - Only try filers that haven't exceeded failure threshold
   - Automatic re-check after reset timeout

**Benefits:**
✅ Circuit breaker prevents wasting time on known-bad filers
✅ Health tracking shared across all operations
✅ Automatic recovery when unhealthy filers come back
✅ Reduced latency - skip filers in failure state
✅ Better visibility with health metrics

**Behavior:**
- Try current filer first (fast path)
- If fails, record failure and try other HEALTHY filers
- Skip filers with failureCount >= threshold (default 3)
- Re-check unhealthy filers after resetTimeout (default 30s)
- Record all successes/failures for health tracking

* Update weed/wdclient/filer_client.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Enable filer discovery with empty filerGroup

Empty filerGroup is a valid value representing the default group. The master client can discover filers even when filerGroup is empty.

**Change:**
- Remove the filerGroup != "" check in NewFilerClient
- Keep only masterClient != nil check
- Empty string will be passed to ListClusterNodes API as-is

This enables filer discovery to work with the default group.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
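A rough sketch of the per-filer circuit-breaker behavior summarized above (failure threshold, reset timeout); the type and method names here are illustrative and the actual FilerClient fields may differ:

```go
// Illustrative circuit-breaker sketch for per-filer health tracking,
// mirroring the behavior described in the commit; names are hypothetical.
package main

import (
	"sync"
	"time"
)

type filerHealth struct {
	failureCount int
	lastFailure  time.Time
}

type healthTracker struct {
	mu               sync.RWMutex
	health           map[string]*filerHealth
	failureThreshold int           // e.g. 3 consecutive failures open the breaker
	resetTimeout     time.Duration // e.g. 30s before an unhealthy filer is retried
}

func newHealthTracker() *healthTracker {
	return &healthTracker{
		health:           make(map[string]*filerHealth),
		failureThreshold: 3,
		resetTimeout:     30 * time.Second,
	}
}

// ShouldSkip reports whether a filer is currently considered unhealthy.
func (t *healthTracker) ShouldSkip(addr string) bool {
	t.mu.RLock()
	defer t.mu.RUnlock()
	h, ok := t.health[addr]
	if !ok || h.failureCount < t.failureThreshold {
		return false
	}
	// breaker is open; allow a re-check once the reset timeout has passed
	return time.Since(h.lastFailure) < t.resetTimeout
}

// RecordSuccess closes the breaker for this filer.
func (t *healthTracker) RecordSuccess(addr string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.health, addr)
}

// RecordFailure counts a failed attempt against this filer.
func (t *healthTracker) RecordFailure(addr string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	h, ok := t.health[addr]
	if !ok {
		h = &filerHealth{}
		t.health[addr] = h
	}
	h.failureCount++
	h.lastFailure = time.Now()
}

func main() {
	t := newHealthTracker()
	t.RecordFailure("localhost:8888")
	t.RecordFailure("localhost:8888")
	t.RecordFailure("localhost:8888")
	_ = t.ShouldSkip("localhost:8888") // true until the reset timeout elapses
}
```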
2025-11-18 | S3: Directly read write volume servers (#7481) | Chris Lu | 1 | -0/+109
* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization
* revert
  Reverted the conditional versioning check to always check versioning status
  Reverted the conditional SSE entry fetch to always fetch entry metadata
  Reverted the conditional versioning check to always check versioning status
  Reverted the conditional SSE entry fetch to always fetch entry metadata
* Lazy Entry Fetch for SSE, Skip Conditional Header Check
* SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)
* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS
* refactor
* Removed Premature Mutual Exclusivity Check
* check for the presence of the X-Amz-Server-Side-Encryption header
* not used
* fmt
* directly read write volume servers
* HTTP Range Request Support
* set header
* md5
* copy object
* fix sse
* fmt
* implement sse
* sse continue
* fixed the suffix range bug (bytes=-N for "last N bytes")
* debug logs
* Missing PartsCount Header
* profiling
* url encoding
* test_multipart_get_part
* headers
* debug
* adjust log level
* handle part number
* Update s3api_object_handlers.go
* nil safety
* set ModifiedTsNs
* remove
* nil check
* fix sse header
* same logic as filer
* decode values
* decode ivBase64

* s3: Fix SSE decryption JWT authentication and streaming errors

Critical fix for SSE (Server-Side Encryption) test failures:

1. **JWT Authentication Bug** (Root Cause):
   - Changed from GenJwtForFilerServer to GenJwtForVolumeServer
   - S3 API now uses correct JWT when directly reading from volume servers
   - Matches filer's authentication pattern for direct volume access
   - Fixes 'unexpected EOF' and 500 errors in SSE tests

2. **Streaming Error Handling**:
   - Added error propagation in getEncryptedStreamFromVolumes goroutine
   - Use CloseWithError() to properly communicate stream failures
   - Added debug logging for streaming errors

3. **Response Header Timing**:
   - Removed premature WriteHeader(http.StatusOK) call
   - Let Go's http package write status automatically on first write
   - Prevents header lock when errors occur during streaming

4. **Enhanced SSE Decryption Debugging**:
   - Added IV/Key validation and logging for SSE-C, SSE-KMS, SSE-S3
   - Better error messages for missing or invalid encryption metadata
   - Added glog.V(2) debugging for decryption setup

This fixes SSE integration test failures where encrypted objects could not be retrieved due to volume server authentication failures. The JWT bug was causing volume servers to reject requests, resulting in truncated/empty streams (EOF) or internal errors.

* s3: Fix SSE multipart upload metadata preservation

Critical fix for SSE multipart upload test failures (SSE-C and SSE-KMS):

**Root Cause - Incomplete SSE Metadata Copying**:
The old code only tried to copy 'SeaweedFSSSEKMSKey' from the first part to the completed object. This had TWO bugs:

1. **Wrong Constant Name** (Key Mismatch Bug):
   - Storage uses: SeaweedFSSSEKMSKeyHeader = 'X-SeaweedFS-SSE-KMS-Key'
   - Old code read: SeaweedFSSSEKMSKey = 'x-seaweedfs-sse-kms-key'
   - Result: SSE-KMS metadata was NEVER copied → 500 errors

2. **Missing SSE-C and SSE-S3 Headers**:
   - SSE-C requires: IV, Algorithm, KeyMD5
   - SSE-S3 requires: encrypted key data + standard headers
   - Old code: copied nothing for SSE-C/SSE-S3 → decryption failures

**Fix - Complete SSE Header Preservation**:
Now copies ALL SSE headers from first part to completed object:
- SSE-C: SeaweedFSSSEIV, CustomerAlgorithm, CustomerKeyMD5
- SSE-KMS: SeaweedFSSSEKMSKeyHeader, AwsKmsKeyId, ServerSideEncryption
- SSE-S3: SeaweedFSSSES3Key, ServerSideEncryption

Applied consistently to all 3 code paths:
1. Versioned buckets (creates version file)
2. Suspended versioning (creates main object with null versionId)
3. Non-versioned buckets (creates main object)

**Why This Is Correct**:
The headers copied EXACTLY match what putToFiler stores during part upload (lines 496-521 in s3api_object_handlers_put.go). This ensures detectPrimarySSEType() can correctly identify encrypted multipart objects and trigger inline decryption with proper metadata.

Fixes: TestSSEMultipartUploadIntegration (SSE-C and SSE-KMS subtests)

* s3: Add debug logging for versioning state diagnosis

Temporary debug logging to diagnose test_versioning_obj_plain_null_version_overwrite_suspended failure.

Added glog.V(0) logging to show:
1. setBucketVersioningStatus: when versioning status is changed
2. PutObjectHandler: what versioning state is detected (Enabled/Suspended/none)
3. PutObjectHandler: which code path is taken (putVersionedObject vs putSuspendedVersioningObject)

This will help identify if:
- The versioning status is being set correctly in bucket config
- The cache is returning stale/incorrect versioning state
- The switch statement is correctly routing to suspended vs enabled handlers

* s3: Enhanced versioning state tracing for suspended versioning diagnosis

Added comprehensive logging across the entire versioning state flow:

PutBucketVersioningHandler:
- Log requested status (Enabled/Suspended)
- Log when calling setBucketVersioningStatus
- Log success/failure of status change

setBucketVersioningStatus:
- Log bucket and status being set
- Log when config is updated
- Log completion with error code

updateBucketConfig:
- Log versioning state being written to cache
- Immediate cache verification after Set
- Log if cache verification fails

getVersioningState:
- Log bucket name and state being returned
- Log if object lock forces VersioningEnabled
- Log errors

This will reveal:
1. If PutBucketVersioning(Suspended) is reaching the handler
2. If the cache update succeeds
3. What state getVersioningState returns during PUT
4. Any cache consistency issues

Expected to show why bucket still reports 'Enabled' after 'Suspended' call.

* s3: Add SSE chunk detection debugging for multipart uploads

Added comprehensive logging to diagnose why TestSSEMultipartUploadIntegration fails:

detectPrimarySSEType now logs:
1. Total chunk count and extended header count
2. All extended headers with 'sse'/'SSE'/'encryption' in the name
3. For each chunk: index, SseType, and whether it has metadata
4. Final SSE type counts (SSE-C, SSE-KMS, SSE-S3)

This will reveal if:
- Chunks are missing SSE metadata after multipart completion
- Extended headers are copied correctly from first part
- The SSE detection logic is working correctly

Expected to show if chunks have SseType=0 (none) or proper SSE types set.

* s3: Trace SSE chunk metadata through multipart completion and retrieval

Added end-to-end logging to track SSE chunk metadata lifecycle:

**During Multipart Completion (filer_multipart.go)**:
1. Log finalParts chunks BEFORE mkFile - shows SseType and metadata
2. Log versionEntry.Chunks INSIDE mkFile callback - shows if mkFile preserves SSE info
3. Log success after mkFile completes

**During GET Retrieval (s3api_object_handlers.go)**:
1. Log retrieved entry chunks - shows SseType and metadata after retrieval
2. Log detected SSE type result

This will reveal at which point SSE chunk metadata is lost:
- If finalParts have SSE metadata but versionEntry.Chunks don't → mkFile bug
- If versionEntry.Chunks have SSE metadata but retrieved chunks don't → storage/retrieval bug
- If chunks never have SSE metadata → multipart completion SSE processing bug

Expected to show chunks with SseType=NONE during retrieval even though they were created with proper SseType during multipart completion.

* s3: Fix SSE-C multipart IV base64 decoding bug

**Critical Bug Found**: SSE-C multipart uploads were failing because:

Root Cause:
- entry.Extended[SeaweedFSSSEIV] stores base64-encoded IV (24 bytes for 16-byte IV)
- SerializeSSECMetadata expects raw IV bytes (16 bytes)
- During multipart completion, we were passing base64 IV directly → serialization error

Error Message: "Failed to serialize SSE-C metadata for chunk in part X: invalid IV length: expected 16 bytes, got 24"

Fix:
- Base64-decode IV before passing to SerializeSSECMetadata
- Added error handling for decode failures

Impact:
- SSE-C multipart uploads will now correctly serialize chunk metadata
- Chunks will have proper SSE metadata for decryption during GET

This fixes the SSE-C subtest of TestSSEMultipartUploadIntegration. SSE-KMS still has a separate issue (error code 23) being investigated.

* fixes
* kms sse
* handle retry if not found in .versions folder and should read the normal object
* quick check (no retries) to see if the .versions/ directory exists
* skip retry if object is not found
* explicit update to avoid sync delay
* fix map update lock
* Remove fmt.Printf debug statements
* Fix SSE-KMS multipart base IV fallback to fail instead of regenerating
* fmt
* Fix ACL grants storage logic
* header handling
* nil handling
* range read for sse content
* test range requests for sse objects
* fmt
* unused code
* upload in chunks
* header case
* fix url
* bucket policy error vs bucket not found
* jwt handling
* fmt
* jwt in request header
* Optimize Case-Insensitive Prefix Check
* dead code
* Eliminated Unnecessary Stream Prefetch for Multipart SSE
* range sse
* sse
* refactor
* context
* fmt
* fix type
* fix SSE-C IV Mismatch
* Fix Headers Being Set After WriteHeader
* fix url parsing
* propagate sse headers
* multipart sse-s3
* aws sig v4 authen
* sse kms
* set content range
* better errors
* Update s3api_object_handlers_copy.go
* Update s3api_object_handlers.go
* Update s3api_object_handlers.go
* avoid magic number
* clean up
* Update s3api_bucket_policy_handlers.go
* fix url parsing
* context
* data and metadata both use background context
* adjust the offset
* SSE Range Request IV Calculation
* adjust logs
* IV relative to offset in each part, not the whole file
* collect logs
* offset
* fix offset
* fix url
* logs
* variable
* jwt
* Multipart ETag semantics: conditionally set object-level Md5 for single-chunk uploads only.
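A minimal sketch of the IV fix described above, assuming the stored IV is base64-encoded and the serializer needs the raw 16-byte value (names are illustrative):

```go
// Minimal sketch: decode a base64-encoded IV back to raw bytes and validate
// its length before handing it to the SSE metadata serializer.
package main

import (
	"crypto/aes"
	"encoding/base64"
	"fmt"
)

func decodeStoredIV(ivBase64 []byte) ([]byte, error) {
	iv := make([]byte, base64.StdEncoding.DecodedLen(len(ivBase64)))
	n, err := base64.StdEncoding.Decode(iv, ivBase64)
	if err != nil {
		return nil, fmt.Errorf("failed to decode stored IV: %w", err)
	}
	iv = iv[:n]
	if len(iv) != aes.BlockSize {
		return nil, fmt.Errorf("invalid IV length: expected %d bytes, got %d", aes.BlockSize, len(iv))
	}
	return iv, nil
}

func main() {
	encoded := base64.StdEncoding.EncodeToString(make([]byte, aes.BlockSize))
	iv, err := decodeStoredIV([]byte(encoded))
	fmt.Println(len(iv), err) // 16 <nil>
}
```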
* sse
* adjust IV and offset
* multipart boundaries
* ensures PUT and GET operations return consistent ETags
* Metadata Header Case
* CommonPrefixes Sorting with URL Encoding
* always sort
* remove the extra PathUnescape call
* fix the multipart get part ETag
* the FileChunk is created without setting ModifiedTsNs
* Sort CommonPrefixes lexicographically to match AWS S3 behavior
* set md5 for multipart uploads
* prevents any potential data loss or corruption in the small-file inline storage path
* compiles correctly
* decryptedReader will now be properly closed after use
* Fixed URL encoding and sort order for CommonPrefixes
* Update s3api_object_handlers_list.go
* SSE-x Chunk View Decryption
* Different IV offset calculations for single-part vs multipart objects
* still too verbose in logs
* less logs
* ensure correct conversion
* fix listing
* nil check
* minor fixes
* nil check
* single character delimiter
* optimize
* range on empty object or zero-length
* correct IV based on its position within that part, not its position in the entire object
* adjust offset
* offset
  Fetch FULL encrypted chunk (not just the range)
  Adjust IV by PartOffset/ChunkOffset only
  Decrypt full chunk
  Skip in the DECRYPTED stream to reach OffsetInChunk
* look breaking
* refactor
* error on no content
* handle intra-block byte skipping
* Incomplete HTTP Response Error Handling
* multipart SSE
* Update s3api_object_handlers.go
* address comments
* less logs
* handling directory
* Optimized rejectDirectoryObjectWithoutSlash() to avoid unnecessary lookups
* Revert "handling directory"
  This reverts commit 3a335f0ac33c63f51975abc63c40e5328857a74b.
* constant
* Consolidate nil entry checks in GetObjectHandler
* add range tests
* Consolidate redundant nil entry checks in HeadObjectHandler
* adjust logs
* SSE type
* large files
* large files
  Reverted the plain-object range test
* ErrNoEncryptionConfig
* Fixed SSERangeReader Infinite Loop Vulnerability
* Fixed SSE-KMS Multipart ChunkReader HTTP Body Leak
* handle empty directory in S3, added PyArrow tests
* purge unused code
* Update s3_parquet_test.py
* Update requirements.txt
* According to S3 specifications, when both partNumber and Range are present, the Range should apply within the selected part's boundaries, not to the full object.
* handle errors
* errors after writing header
* https
* fix: Wait for volume assignment readiness before running Parquet tests

  The test-implicit-dir-with-server test was failing with an Internal Error because volume assignment was not ready when tests started. This fix adds a check that attempts a volume assignment and waits for it to succeed before proceeding with tests.

  This ensures that:
  1. Volume servers are registered with the master
  2. Volume growth is triggered if needed
  3. The system can successfully assign volumes for writes

  Fixes the timeout issue where boto3 would retry 4 times and fail with 'We encountered an internal error, please try again.'

* sse tests
* store derived IV
* fix: Clean up gRPC ports between tests to prevent port conflicts

  The second test (test-implicit-dir-with-server) was failing because the volume server's gRPC port (18080 = VOLUME_PORT + 10000) was still in use from the first test. The cleanup code only killed HTTP port processes, not gRPC port processes.

  Added cleanup for gRPC ports in all stop targets:
  - Master gRPC: MASTER_PORT + 10000 (19333)
  - Volume gRPC: VOLUME_PORT + 10000 (18080)
  - Filer gRPC: FILER_PORT + 10000 (18888)

  This ensures clean state between test runs in CI.
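A small sketch of the partNumber + Range interaction noted above: the requested byte range is interpreted relative to the selected part and clamped to that part's boundaries (helper name and layout are assumptions, not the actual handler code):

```go
// Sketch of clamping a Range request to the boundaries of the selected part.
package main

import "fmt"

// resolvePartRange maps a range requested against a part (rangeStart/rangeEnd
// are relative to the part) onto absolute offsets within the object, clamped
// so the range never crosses the part boundary.
func resolvePartRange(partOffset, partSize, rangeStart, rangeEnd int64) (absStart, absEnd int64) {
	if rangeEnd >= partSize {
		rangeEnd = partSize - 1 // clamp to the end of the selected part
	}
	if rangeStart < 0 {
		rangeStart = 0
	}
	return partOffset + rangeStart, partOffset + rangeEnd
}

func main() {
	// Part 2 starts at byte 5 MiB and is 5 MiB long; client asks for bytes=0-999.
	start, end := resolvePartRange(5<<20, 5<<20, 0, 999)
	fmt.Println(start, end) // 5242880 5243879
}
```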
* add import
* address comments
* docs: Add placeholder documentation files for Parquet test suite

  Added three missing documentation files referenced in test/s3/parquet/README.md:
  1. TEST_COVERAGE.md - Documents 43 total test cases (17 Go unit tests, 6 Python integration tests, 20 Python end-to-end tests)
  2. FINAL_ROOT_CAUSE_ANALYSIS.md - Explains the s3fs compatibility issue with PyArrow, the implicit directory problem, and how the fix works
  3. MINIO_DIRECTORY_HANDLING.md - Compares MinIO's directory handling approach with SeaweedFS's implementation

  Each file contains:
  - Title and overview
  - Key technical details relevant to the topic
  - TODO sections for future expansion

  These placeholder files resolve the broken README links and provide structure for future detailed documentation.

* clean up if metadata operation failed
* Update s3_parquet_test.py
* clean up
* Update Makefile
* Update s3_parquet_test.py
* Update Makefile
* Handle ivSkip for non-block-aligned offsets
* Update README.md
* stop volume server faster
* stop volume server in 1 second
* different IV for each chunk in SSE-S3 and SSE-KMS
* clean up if fails
* testing upload
* error propagation
* fmt
* simplify
* fix copying
* less logs
* endian
* Added marshaling error handling
* handling invalid ranges
* error handling for adding to log buffer
* fix logging
* avoid returning too quickly and ensure proper cleaning up
* Activity Tracking for Disk Reads
* Cleanup Unused Parameters
* Activity Tracking for Kafka Publishers
* Proper Test Error Reporting
* refactoring
* less logs
* less logs
* go fmt
* guard it with if entry.Attributes.TtlSec > 0 to match the pattern used elsewhere.
* Handle bucket-default encryption config errors explicitly for multipart
* consistent activity tracking
* obsolete code for s3 on filer read/write handlers
* Update weed/s3api/s3api_object_handlers_list.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
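A hedged sketch of the AES-CTR offset handling referenced above ("Handle ivSkip for non-block-aligned offsets"): advance the counter in the IV by offset/16 blocks, then discard offset%16 bytes of the decrypted stream. This mirrors the general technique, not SeaweedFS's exact helper:

```go
// Sketch of AES-CTR seeking: adjust the IV counter for a byte offset and
// report how many bytes inside the first block must be skipped ("ivSkip").
package main

import (
	"crypto/aes"
	"fmt"
)

// adjustIVForOffset returns a copy of baseIV advanced by offset/BlockSize
// counter increments, plus the number of bytes to skip inside the first block.
func adjustIVForOffset(baseIV []byte, offset int64) (iv []byte, skip int) {
	iv = make([]byte, len(baseIV))
	copy(iv, baseIV)
	blocks := uint64(offset / aes.BlockSize)
	skip = int(offset % aes.BlockSize)
	// add the block count to the big-endian counter in the IV, with carry
	for i := len(iv) - 1; i >= 0 && blocks > 0; i-- {
		sum := uint64(iv[i]) + (blocks & 0xff)
		iv[i] = byte(sum)
		blocks = (blocks >> 8) + (sum >> 8)
	}
	return iv, skip
}

func main() {
	base := make([]byte, aes.BlockSize)
	iv, skip := adjustIVForOffset(base, 100)
	fmt.Println(iv[15], skip) // 6 4 (100 = 6*16 + 4)
}
```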
2025-07-22 | fix listing objects (#7008) | Chris Lu | 1 | -0/+67
* fix listing objects
* add more list testing
* address comments
* fix next marker
* fix isTruncated in listing
* fix tests
* address tests
* Update s3api_object_handlers_multipart.go
* fixes
* store json into bucket content, for tagging and cors
* switch bucket metadata from json to proto
* fix
* Update s3api_bucket_config.go
* fix test issue
* fix test_bucket_listv2_delimiter_prefix
* Update cors.go
* skip special characters
* passing listing
* fix test_bucket_list_delimiter_prefix
* ok. fix the xsd generated go code now
* fix cors tests
* fix test
* fix test_bucket_list_unordered and test_bucket_listv2_unordered
  do not accept the allow-unordered and delimiter parameter combination
* fix test_bucket_list_objects_anonymous and test_bucket_listv2_objects_anonymous
  The tests test_bucket_list_objects_anonymous and test_bucket_listv2_objects_anonymous were failing because they try to set bucket ACL to public-read, but SeaweedFS only supported private ACL.
  Updated PutBucketAclHandler to use the existing ExtractAcl function which already supports all standard S3 canned ACLs.
  Replaced the hardcoded check for only private ACL with proper ACL parsing that handles public-read, public-read-write, authenticated-read, bucket-owner-read, bucket-owner-full-control, etc.
  Added unit tests to verify all standard canned ACLs are accepted.
* fix list unordered
  The test is expecting the error code to be InvalidArgument instead of InvalidRequest
* allow anonymous listing (and head, get)
* fix test_bucket_list_maxkeys_invalid
  Invalid values: max-keys=blah → Returns ErrInvalidMaxKeys (HTTP 400)
* updating IsPublicRead when parsing acl
* more logs
* CORS Test Fix
* fix test_bucket_list_return_data
* default to private
* fix test_bucket_list_delimiter_not_skip_special
* default no acl
* add debug logging
* more logs
* use basic http client, remove logs also
* fixes
* debug
* Update stats.go
* debugging
* fix anonymous test expectation
  anonymous user can read, as configured in s3 json.
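A small sketch of the max-keys validation behavior described above (a non-numeric value is rejected; the cap of 1000 is an illustrative default, not necessarily the server's constant):

```go
// Sketch: reject non-numeric or negative max-keys with an InvalidArgument-style
// error, otherwise cap the value at an assumed server-side maximum.
package main

import (
	"fmt"
	"strconv"
)

const defaultMaxKeys = 1000 // illustrative cap

func parseMaxKeys(raw string) (int, error) {
	if raw == "" {
		return defaultMaxKeys, nil
	}
	n, err := strconv.Atoi(raw)
	if err != nil || n < 0 {
		return 0, fmt.Errorf("InvalidArgument: max-keys must be a non-negative integer, got %q", raw)
	}
	if n > defaultMaxKeys {
		n = defaultMaxKeys
	}
	return n, nil
}

func main() {
	fmt.Println(parseMaxKeys("blah")) // 0 InvalidArgument: ...
	fmt.Println(parseMaxKeys("250"))  // 250 <nil>
}
```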
2022-10-29 | add back "/" prefix if it is missing in object | chrislu | 1 | -0/+34
fix https://github.com/seaweedfs/seaweedfs/issues/3737
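A minimal sketch of the fix referenced above, assuming a simple prefix check on the object key:

```go
// Sketch: ensure the object key carries a leading "/" before it is used as a path.
package main

import (
	"fmt"
	"strings"
)

func ensureLeadingSlash(object string) string {
	if !strings.HasPrefix(object, "/") {
		return "/" + object
	}
	return object
}

func main() {
	fmt.Println(ensureLeadingSlash("bucket/key.txt")) // /bucket/key.txt
	fmt.Println(ensureLeadingSlash("/already/ok"))    // /already/ok
}
```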
2022-08-15 | Remove duplicate slashes in object path to prevent 500 errors (#3442) | Andrey Triumfov | 1 | -0/+48
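A hedged sketch of the duplicate-slash cleanup this commit describes; the helper name is illustrative:

```go
// Sketch: collapse runs of "/" in the object path so lookups do not hit
// non-existent entries and return 500 errors.
package main

import (
	"fmt"
	"strings"
)

func removeDuplicateSlashes(object string) string {
	var b strings.Builder
	b.Grow(len(object))
	for i := 0; i < len(object); i++ {
		if object[i] == '/' && i > 0 && object[i-1] == '/' {
			continue // skip the repeated slash
		}
		b.WriteByte(object[i])
	}
	return b.String()
}

func main() {
	fmt.Println(removeDuplicateSlashes("/bucket//dir///key.txt")) // /bucket/dir/key.txt
}
```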