author     Chris Lu <chrislusf@users.noreply.github.com>  2025-11-26 12:07:54 -0800
committer  GitHub <noreply@github.com>                    2025-11-26 12:07:54 -0800
commit     edf0ef7a80e58333a11cf8fff5915d7a5d8ea15a (patch)
tree       fd7099ee9981eaf090fb049362ac005e4f5e2c8f
parent     5075381060a264bab1be3763775d4914e947e814 (diff)
download   seaweedfs-edf0ef7a80e58333a11cf8fff5915d7a5d8ea15a.tar.xz
           seaweedfs-edf0ef7a80e58333a11cf8fff5915d7a5d8ea15a.zip
Filer, S3: Feature/add concurrent file upload limit (#7554)
* Support multiple filers for S3 and IAM servers with automatic failover

This change adds support for multiple filer addresses in the 'weed s3' and 'weed iam' commands, enabling high availability through automatic failover.

Key changes:
- Updated S3ApiServerOption.Filer to Filers ([]pb.ServerAddress)
- Updated IamServerOption.Filer to Filers ([]pb.ServerAddress)
- Modified -filer flag to accept comma-separated addresses
- Added getFilerAddress() helper methods for backward compatibility
- Updated all filer client calls to support multiple addresses
- Uses pb.WithOneOfGrpcFilerClients for automatic failover

Usage:
    weed s3 -filer=localhost:8888,localhost:8889
    weed iam -filer=localhost:8888,localhost:8889

The underlying FilerClient already supported multiple filers with health tracking and automatic failover - this change exposes that capability through the command-line interface.

* Add filer discovery: treat initial filers as seeds and discover peers from master

Enhances FilerClient to automatically discover additional filers in the same filer group by querying the master server. This allows users to specify just a few seed filers, and the client will discover all other filers in the cluster. (A sketch of this background refresh loop appears below, after these commit notes.)

Key changes to wdclient/FilerClient:
- Added MasterClient, FilerGroup, and DiscoveryInterval fields
- Added thread-safe filer list management with RWMutex
- Implemented discoverFilers() background goroutine
- Uses cluster.ListExistingPeerUpdates() to query master for filers
- Automatically adds newly discovered filers to the list
- Added Close() method to clean up discovery goroutine

New FilerClientOption fields:
- MasterClient: enables filer discovery from master
- FilerGroup: specifies which filer group to discover
- DiscoveryInterval: how often to refresh (default 5 minutes)

Usage example:
    masterClient := wdclient.NewMasterClient(...)
    filerClient := wdclient.NewFilerClient(
        []pb.ServerAddress{"localhost:8888"}, // seed filers
        grpcDialOption,
        dataCenter,
        &wdclient.FilerClientOption{
            MasterClient: masterClient,
            FilerGroup:   "my-group",
        },
    )
    defer filerClient.Close()

The initial filers act as seeds - the client discovers and adds all other filers in the same group from the master. Discovered filers are added dynamically without removing existing ones (relying on health checks for unavailable filers).

* Address PR review comments: implement full failover for IAM operations

Critical fixes based on code review feedback:

1. **IAM API Failover (Critical)**:
   - Replace pb.WithGrpcFilerClient with pb.WithOneOfGrpcFilerClients in:
     * GetS3ApiConfigurationFromFiler()
     * PutS3ApiConfigurationToFiler()
     * GetPolicies()
     * PutPolicies()
   - Now all IAM operations support automatic failover across multiple filers

2. **Validation Improvements**:
   - Add validation in NewIamApiServerWithStore() to require at least one filer
   - Add validation in NewS3ApiServerWithStore() to require at least one filer
   - Add warning log when no filers configured for credential store

3. **Error Logging**:
   - Circuit breaker now logs when config load fails instead of silently ignoring
   - Helps operators understand why circuit breaker limits aren't applied

4. **Code Quality**:
   - Use ToGrpcAddress() for filer address in credential store setup
   - More consistent with rest of codebase and future-proof

These changes ensure IAM operations have the same high availability guarantees as S3 operations, completing the multi-filer failover implementation.
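A minimal sketch of the seed-plus-discovery pattern described above. All names here (filerSet, discoverLoop, listPeers) are illustrative, not the actual SeaweedFS identifiers; the real FilerClient queries the master via cluster.ListExistingPeerUpdates() and keeps per-filer health state.

    package main

    import (
    	"log"
    	"sync"
    	"time"
    )

    // filerSet keeps a deduplicated, mutex-guarded list of filer addresses.
    type filerSet struct {
    	mu    sync.RWMutex
    	addrs []string
    }

    // refresh merges newly discovered peers into the existing list, using a map
    // for O(1) membership checks so the write lock is held only briefly.
    func (s *filerSet) refresh(discovered []string) {
    	s.mu.Lock()
    	defer s.mu.Unlock()
    	existing := make(map[string]struct{}, len(s.addrs))
    	for _, a := range s.addrs {
    		existing[a] = struct{}{}
    	}
    	for _, a := range discovered {
    		if _, ok := existing[a]; !ok {
    			s.addrs = append(s.addrs, a)
    		}
    	}
    }

    // discoverLoop periodically asks the master for peers until done is closed.
    // The deferred recover keeps a panic in one refresh from killing the process.
    func discoverLoop(s *filerSet, listPeers func() []string, interval time.Duration, done <-chan struct{}) {
    	defer func() {
    		if r := recover(); r != nil {
    			log.Printf("filer discovery goroutine recovered from panic: %v", r)
    		}
    	}()
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		select {
    		case <-done:
    			return
    		case <-ticker.C:
    			s.refresh(listPeers())
    		}
    	}
    }

    func main() {
    	seeds := &filerSet{addrs: []string{"localhost:8888"}} // seed filers
    	done := make(chan struct{})
    	var closeOnce sync.Once // idempotent Close(), as in a later commit
    	go discoverLoop(seeds, func() []string { return []string{"localhost:8889"} }, time.Second, done)
    	time.Sleep(2 * time.Second)
    	closeOnce.Do(func() { close(done) })
    	seeds.mu.RLock()
    	log.Println("known filers:", seeds.addrs)
    	seeds.mu.RUnlock()
    }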
* Fix IAM manager initialization: remove code duplication, add TODO for HA

Addresses review comment on s3api_server.go:145

Changes:
- Remove duplicate code for getting first filer address
- Extract filerAddr variable once and reuse
- Add TODO comment documenting the HA limitation for IAM manager
- Document that loadIAMManagerFromConfig and NewS3IAMIntegration need updates to support multiple filers for full HA

Note: This is a known limitation when using filer-backed IAM stores. The interfaces need to be updated to accept multiple filer addresses. For now, documenting this limitation clearly.

* Document credential store HA limitation with TODO

Addresses review comment on auth_credentials.go:149

Changes:
- Add TODO comment documenting that SetFilerClient interface needs update for multi-filer support
- Add informative log message indicating HA limitation
- Document that this is a known limitation for filer-backed credential stores

The SetFilerClient interface currently only accepts a single filer address. To properly support HA, the credential store interfaces need to be updated to handle multiple filer addresses.

* Track current active filer in FilerClient for better HA

Add GetCurrentFiler() method to FilerClient that returns the currently active filer based on the filerIndex which is updated on successful operations. This provides better availability than always using the first filer. (A sketch of this accessor pattern appears below.)

Changes:
- Add FilerClient.GetCurrentFiler() method that returns current active filer
- Update S3ApiServer.getFilerAddress() to use FilerClient's current filer
- Add fallback to first filer if FilerClient not yet initialized
- Document IAM limitation (doesn't have FilerClient access)

Benefits:
- Single-filer operations (URLs, ReadFilerConf, etc.) now use the currently active/healthy filer
- Better distribution and failover behavior
- FilerClient's round-robin and health tracking automatically determines which filer to use

* Document ReadFilerConf HA limitation in lifecycle handlers

Addresses review comment on s3api_bucket_handlers.go:880

Add comment documenting that ReadFilerConf uses the current active filer from FilerClient (which is better than always using the first filer), but doesn't have built-in multi-filer failover. Add TODO to update filer.ReadFilerConf to support multiple filers for complete HA. For now, it uses the currently active/healthy filer tracked by FilerClient which provides reasonable availability.

* Document multipart upload URL HA limitation

Addresses review comment on s3api_object_handlers_multipart.go:442

Add comment documenting that part upload URLs point to the current active filer (tracked by FilerClient), which is better than always using the first filer but still creates a potential point of failure if that filer becomes unavailable during upload.

Suggest TODO solutions:
- Use virtual hostname/load balancer for filers
- Have S3 server proxy uploads to healthy filers

Current behavior provides reasonable availability by using the currently active/healthy filer rather than being pinned to the first filer.
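A minimal sketch of the "current active filer" accessor described in the "Track current active filer" note above. The type and field names are illustrative; the real FilerClient holds pb.ServerAddress values and additional health state.

    package main

    import (
    	"fmt"
    	"sync"
    	"sync/atomic"
    )

    // filerClient tracks a list of filer addresses plus the index of the filer
    // that last served a request successfully.
    type filerClient struct {
    	mu         sync.RWMutex
    	addresses  []string
    	filerIndex int32 // updated atomically on successful operations
    }

    // GetCurrentFiler returns the currently active filer, falling back to the
    // first address if the tracked index is out of range.
    func (fc *filerClient) GetCurrentFiler() string {
    	fc.mu.RLock()
    	defer fc.mu.RUnlock()
    	if len(fc.addresses) == 0 {
    		return ""
    	}
    	i := int(atomic.LoadInt32(&fc.filerIndex))
    	if i < 0 || i >= len(fc.addresses) {
    		i = 0
    	}
    	return fc.addresses[i]
    }

    // markSuccess records which filer just served a request successfully.
    func (fc *filerClient) markSuccess(index int) {
    	atomic.StoreInt32(&fc.filerIndex, int32(index))
    }

    func main() {
    	fc := &filerClient{addresses: []string{"localhost:8888", "localhost:8889"}}
    	fc.markSuccess(1)
    	fmt.Println(fc.GetCurrentFiler()) // localhost:8889
    }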
* Document multipart completion Location URL limitation

Addresses review comment on filer_multipart.go:187

Add comment documenting that the Location URL in the CompleteMultipartUpload response points to the current active filer (tracked by FilerClient). Note that clients should ideally use the S3 API endpoint rather than this direct URL. If direct access is attempted and the specific filer is unavailable, the request will fail. Current behavior uses the currently active/healthy filer rather than being pinned to the first filer, providing better availability.

* Make credential store use current active filer for HA

Update FilerEtcStore to use a function that returns the current active filer instead of a fixed address, enabling high availability. (A sketch of this function-valued address pattern appears below.)

Changes:
- Add SetFilerAddressFunc() method to FilerEtcStore
- Store uses filerAddressFunc instead of fixed filerGrpcAddress
- withFilerClient() calls the function to get current active filer
- Keep SetFilerClient() for backward compatibility (marked deprecated)
- Update S3ApiServer to pass FilerClient.GetCurrentFiler to store

Benefits:
- Credential store now uses currently active/healthy filer
- Automatic failover when filer becomes unavailable
- True HA for credential operations
- Backward compatible with old SetFilerClient interface

This addresses the credential store limitation - no longer pinned to the first filer, uses FilerClient's tracked current active filer.

* Clarify multipart URL comments: filer address not used for uploads

Update comments to reflect that multipart upload URLs are not actually used for upload traffic - uploads go directly to volume servers.

Key clarifications:
- genPartUploadUrl: Filer address is parsed out, only path is used
- CompleteMultipartUpload Location: Informational field per AWS S3 spec
- Actual uploads bypass filer proxy and go directly to volume servers

The filer address in these URLs is NOT a HA concern because:
1. Part uploads: URL is parsed for path, upload goes to volume servers
2. Location URL: Informational only, clients use S3 endpoint

This addresses the observation that S3 uploads don't go through filers, only metadata operations do.

* Remove filer address from upload paths - pass path directly

Eliminate unnecessary filer address from upload URLs by passing file paths directly instead of full URLs that get immediately parsed.

Changes:
- Rename genPartUploadUrl() → genPartUploadPath() (returns path only)
- Rename toFilerUrl() → toFilerPath() (returns path only)
- Update putToFiler() to accept filePath instead of uploadUrl
- Remove URL parsing code (no longer needed)
- Remove net/url import (no longer used)
- Keep old function names as deprecated wrappers for compatibility

Benefits:
- Cleaner code - no fake URL construction/parsing
- No dependency on filer address for internal operations
- More accurate naming (these are paths, not URLs)
- Eliminates confusion about HA concerns

This completely removes the filer address from upload operations - it was never actually used for routing, only parsed for the path.
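A minimal sketch of the function-valued filer address used by the credential store change above. The names etcStore and filerAddrFn are illustrative; the real store is FilerEtcStore, the resolver passed in is FilerClient.GetCurrentFiler, and addresses are pb.ServerAddress values rather than strings.

    package main

    import (
    	"errors"
    	"fmt"
    	"sync"
    )

    // etcStore resolves the filer to talk to at call time, via a function,
    // instead of caching a fixed address at initialization.
    type etcStore struct {
    	mu          sync.RWMutex
    	filerAddrFn func() string
    }

    // SetFilerAddressFunc installs the resolver, e.g. the client's current-filer accessor.
    func (s *etcStore) SetFilerAddressFunc(fn func() string) {
    	s.mu.Lock()
    	defer s.mu.Unlock()
    	s.filerAddrFn = fn
    }

    // withFilerClient looks up the current active filer for every operation, so
    // the store follows the client's failover instead of pinning the first filer.
    func (s *etcStore) withFilerClient(op func(addr string) error) error {
    	s.mu.RLock()
    	fn := s.filerAddrFn
    	s.mu.RUnlock()
    	if fn == nil {
    		return errors.New("filer_etc: filer address function not configured")
    	}
    	addr := fn()
    	if addr == "" {
    		return errors.New("filer_etc: filer address is empty")
    	}
    	return op(addr)
    }

    func main() {
    	store := &etcStore{}
    	store.SetFilerAddressFunc(func() string { return "localhost:8888" })
    	_ = store.withFilerClient(func(addr string) error {
    		fmt.Println("loading credentials via", addr)
    		return nil
    	})
    }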
* Remove deprecated functions: use new path-based functions directly

Remove deprecated wrapper functions and update all callers to use the new function names directly.

Removed:
- genPartUploadUrl() → all callers now use genPartUploadPath()
- toFilerUrl() → all callers now use toFilerPath()
- SetFilerClient() → removed along with fallback code

Updated:
- s3api_object_handlers_multipart.go: uploadUrl → filePath
- s3api_object_handlers_put.go: uploadUrl → filePath, versionUploadUrl → versionFilePath
- s3api_object_versioning.go: toFilerUrl → toFilerPath
- s3api_object_handlers_test.go: toFilerUrl → toFilerPath
- auth_credentials.go: removed SetFilerClient fallback
- filer_etc_store.go: removed deprecated SetFilerClient method

Benefits:
- Cleaner codebase with no deprecated functions
- All variable names accurately reflect that they're paths, not URLs
- Single interface for credential stores (SetFilerAddressFunc only)

All code now consistently uses the new path-based approach.

* Fix toFilerPath: remove URL escaping for raw file paths

The toFilerPath function should return raw file paths, not URL-escaped paths. URL escaping was needed when the path was embedded in a URL (old toFilerUrl), but now that we pass paths directly to putToFiler, they should be unescaped.

This fixes S3 integration test failures:
- test_bucket_listv2_encoding_basic
- test_bucket_list_encoding_basic
- test_bucket_listv2_delimiter_whitespace
- test_bucket_list_delimiter_whitespace

The tests were failing because paths were double-encoded (escaped when stored, then escaped again when listed), resulting in %252B instead of %2B for '+' characters. (A small demonstration of this double-encoding appears below.)

Root cause: When we removed URL parsing in putToFiler, we should have also removed URL escaping in toFilerPath since paths are now used directly without URL encoding/decoding.

* Add thread safety to FilerEtcStore and clarify credential store comments

Address review suggestions for better thread safety and code clarity:

1. **Thread Safety**: Add RWMutex to FilerEtcStore
   - Protects filerAddressFunc and grpcDialOption from concurrent access
   - Initialize() uses write lock when setting function
   - SetFilerAddressFunc() uses write lock
   - withFilerClient() uses read lock to get function and dial option
   - GetPolicies() uses read lock to check if configured

2. **Improved Error Messages**:
   - Prefix errors with "filer_etc:" for easier debugging
   - "filer address not configured" → "filer_etc: filer address function not configured"
   - "filer address is empty" → "filer_etc: filer address is empty"

3. **Clarified Comments**:
   - auth_credentials.go: Clarify that initial setup is temporary
   - Document that it's updated in s3api_server.go after FilerClient creation
   - Remove ambiguity about when FilerClient.GetCurrentFiler is used

Benefits:
- Safe for concurrent credential operations
- Clear error messages for debugging
- Explicit documentation of initialization order
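A small, self-contained illustration of the double-encoding failure mode behind the toFilerPath fix above. It uses url.QueryEscape purely to show the effect; it is not the exact call used by the S3 listing code.

    package main

    import (
    	"fmt"
    	"net/url"
    )

    func main() {
    	key := "a+b" // object key containing '+'

    	// Escaping once is what a URL-encoded listing response should return.
    	once := url.QueryEscape(key)
    	fmt.Println(once) // a%2Bb

    	// If the stored path was already escaped, listing escapes it again and
    	// the client sees %252B where %2B was expected.
    	twice := url.QueryEscape(once)
    	fmt.Println(twice) // a%252Bb
    }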
* Enable filer discovery: pass master addresses to FilerClient

Fix two critical issues:

1. **Filer Discovery Not Working**: Master client was not being passed to FilerClient, so peer discovery couldn't work
2. **Credential Store Design**: Already uses FilerClient via GetCurrentFiler function - this is the correct design for HA

Changes:

**Command (s3.go):**
- Read master addresses from GetFilerConfiguration response
- Pass masterAddresses to S3ApiServerOption
- Log master addresses for visibility

**S3ApiServerOption:**
- Add Masters []pb.ServerAddress field for discovery

**S3ApiServer:**
- Create MasterClient from Masters when available
- Pass MasterClient + FilerGroup to FilerClient via options
- Enable discovery with 5-minute refresh interval
- Log whether discovery is enabled or disabled

**Credential Store:**
- Already correctly uses filerClient.GetCurrentFiler via function
- This provides HA without tight coupling to FilerClient struct
- Function-based design is clean and thread-safe

Discovery Flow:
1. S3 command reads filer config → gets masters + filer group
2. S3ApiServer creates MasterClient from masters
3. FilerClient uses MasterClient to query for peer filers
4. Background goroutine refreshes peer list every 5 minutes
5. Credential store uses GetCurrentFiler to get active filer

Now filer discovery actually works!

* Use S3 endpoint in multipart Location instead of filer address

* Add multi-filer failover to ReadFilerConf

* Address CodeRabbit review: fix buffer reuse and improve lock safety

Address two code review suggestions:

1. **Fix buffer reuse in ReadFilerConfFromFilers**:
   - Use local []byte data instead of shared buffer
   - Prevents partial data from failed attempts affecting successful reads
   - Creates fresh buffer inside callback for masterClient path
   - More robust to future changes in read helpers
   (A sketch of this per-attempt buffer pattern appears below.)

2. **Improve lock safety in FilerClient**:
   - Add *WithHealth variants that accept health pointer
   - Get health pointer while holding lock, then release before calling
   - Eliminates potential for lock confusion (though no actual deadlock existed)
   - Clearer separation: lock for data access, atomics for health ops

Changes:
- ReadFilerConfFromFilers: var data []byte, create buf inside callback
- shouldSkipUnhealthyFilerWithHealth(health *filerHealth)
- recordFilerSuccessWithHealth(health *filerHealth)
- recordFilerFailureWithHealth(health *filerHealth)
- Keep old functions for backward compatibility (marked deprecated)
- Update LookupVolumeIds to use WithHealth variants

Benefits:
- More robust multi-filer configuration reading
- Clearer lock vs atomic operation boundaries
- No lock held during health checks (even though atomics don't block)
- Better code organization and maintainability

* add constant

* Fix IAM manager and post policy to use current active filer

* Fix critical race condition and goroutine leak

* Update weed/s3api/filer_multipart.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
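A minimal sketch of the "fresh buffer per attempt" idea behind the ReadFilerConfFromFilers fix above. The names readFromAny and readConf are illustrative; the real helper reads filer.conf over gRPC and consults per-filer health state rather than calling a plain function.

    package main

    import (
    	"errors"
    	"fmt"
    )

    // readFromAny tries each filer in order and returns the first successful
    // read. Each attempt fills its own buffer, so a partial write from a failed
    // attempt can never leak into the data returned by a later, successful one.
    func readFromAny(filers []string, readConf func(addr string) ([]byte, error)) ([]byte, error) {
    	var lastErr error
    	for _, addr := range filers {
    		data, err := readConf(addr) // fresh buffer allocated inside each attempt
    		if err != nil {
    			lastErr = err
    			continue
    		}
    		return data, nil
    	}
    	return nil, fmt.Errorf("all filers failed: %w", lastErr)
    }

    func main() {
    	filers := []string{"localhost:8888", "localhost:8889"}
    	data, err := readFromAny(filers, func(addr string) ([]byte, error) {
    		if addr == "localhost:8888" {
    			return nil, errors.New("connection refused")
    		}
    		return []byte("filer.conf contents"), nil
    	})
    	fmt.Println(string(data), err)
    }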
* Fix compilation error and address code review suggestions

Address remaining unresolved comments:

1. **Fix compilation error**: Add missing net/url import
   - filer_multipart.go used url.PathEscape without import
   - Added "net/url" to imports

2. **Fix Location URL formatting** (all 4 occurrences):
   - Add missing slash between bucket and key
   - Use url.PathEscape for bucket names
   - Use urlPathEscape for object keys
   - Handles special characters in bucket/key names
   - Before: http://host/bucketkey
   - After: http://host/bucket/key (properly escaped)

3. **Optimize discovery loop** (O(N*M) → O(N+M)):
   - Use map for existing filers (O(1) lookup)
   - Reduces time holding write lock
   - Better performance with many filers
   - Before: Nested loop for each discovered filer
   - After: Build map once, then O(1) lookups

Changes:
- filer_multipart.go: Import net/url, fix all Location URLs
- filer_client.go: Use map for efficient filer discovery

Benefits:
- Compiles successfully
- Proper URL encoding (handles spaces, special chars)
- Faster discovery with less lock contention
- Production-ready URL formatting

* Fix race conditions and make Close() idempotent

Address CodeRabbit review #3512078995:

1. **Critical: Fix unsynchronized read in error message**
   - Line 584 read len(fc.filerAddresses) without lock
   - Race with refreshFilerList appending to slice
   - Fixed: Take RLock to read length safely
   - Prevents race detector warnings

2. **Important: Make Close() idempotent**
   - Closing already-closed channel panics
   - Can happen with layered cleanup in shutdown paths
   - Fixed: Use sync.Once to ensure single close
   - Safe to call Close() multiple times now

3. **Nitpick: Add warning for empty filer address**
   - getFilerAddress() can return empty string
   - Helps diagnose unexpected state
   - Added: Warning log when no filers available

4. **Nitpick: Guard deprecated index-based helpers**
   - shouldSkipUnhealthyFiler, recordFilerSuccess/Failure
   - Accessed filerHealth without lock (races with discovery)
   - Fixed: Take RLock and check bounds before array access
   - Prevents index out of bounds and races

Changes:
- filer_client.go:
  - Add closeDiscoveryOnce sync.Once field
  - Use Do() in Close() for idempotent channel close
  - Add RLock guards to deprecated index-based helpers
  - Add bounds checking to prevent panics
  - Synchronized read of filerAddresses length in error
- s3api_server.go:
  - Add warning log when getFilerAddress returns empty

Benefits:
- No race conditions (passes race detector)
- No panic on double-close
- Better error diagnostics
- Safe with discovery enabled
- Production-hardened shutdown logic

* Fix hardcoded http scheme and add panic recovery

Address CodeRabbit review #3512114811:

1. **Major: Fix hardcoded http:// scheme in Location URLs**
   - Location URLs always used http:// regardless of client connection
   - HTTPS clients got http:// URLs (incorrect)
   - Fixed: Detect scheme from request
   - Check X-Forwarded-Proto header (for proxies) first
   - Check r.TLS != nil for direct HTTPS
   - Fallback to http for plain connections
   - Applied to all 4 CompleteMultipartUploadResult locations

2. **Major: Add panic recovery to discovery goroutine**
   - Long-running background goroutine could crash entire process
   - Panic in refreshFilerList would terminate program
   - Fixed: Add defer recover() with error logging
   - Goroutine failures now logged, not fatal

3. **Note: Close() idempotency already implemented**
   - Review flagged as duplicate issue
   - Already fixed in commit 3d7a65c7e
   - sync.Once (closeDiscoveryOnce) prevents double-close panic
   - Safe to call Close() multiple times

Changes:
- filer_multipart.go:
  - Add getRequestScheme() helper function
  - Update all 4 Location URLs to use dynamic scheme
  - Format: scheme://host/bucket/key (was: http://...)
- filer_client.go:
  - Add panic recovery to discoverFilers()
  - Log panics instead of crashing

Benefits:
- Correct scheme (https/http) in Location URLs
- Works behind proxies (X-Forwarded-Proto)
- No process crashes from discovery failures
- Production-hardened background goroutine
- Proper AWS S3 API compliance
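A minimal sketch of the request-scheme detection described in the note above. The helper name getRequestScheme matches the commit message; the surrounding demo code is illustrative.

    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httptest"
    )

    // getRequestScheme prefers the X-Forwarded-Proto header set by proxies, then
    // falls back to inspecting the TLS state, and finally to plain http.
    func getRequestScheme(r *http.Request) string {
    	if proto := r.Header.Get("X-Forwarded-Proto"); proto != "" {
    		return proto
    	}
    	if r.TLS != nil {
    		return "https"
    	}
    	return "http"
    }

    func main() {
    	r := httptest.NewRequest(http.MethodPost, "/bucket/key?uploadId=1", nil)
    	r.Header.Set("X-Forwarded-Proto", "https")
    	// Location for CompleteMultipartUpload, with the scheme taken from the request.
    	fmt.Printf("%s://%s/%s/%s\n", getRequestScheme(r), r.Host, "bucket", "key")
    }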
* filer: add ConcurrentFileUploadLimit option to limit number of concurrent uploads

This adds a new configuration option ConcurrentFileUploadLimit that limits the number of concurrent file uploads based on file count, complementing the existing ConcurrentUploadLimit which limits based on total data size. This addresses an OOM vulnerability where requests with missing/zero Content-Length headers could bypass the size-based rate limiter.

Changes:
- Add ConcurrentFileUploadLimit field to FilerOption
- Add inFlightUploads counter to FilerServer
- Update upload handler to check both size and count limits
- Add -concurrentFileUploadLimit command line flag (default: 0 = unlimited)

Fixes #7529

* s3: add ConcurrentFileUploadLimit option to limit number of concurrent uploads

This adds a new configuration option ConcurrentFileUploadLimit that limits the number of concurrent file uploads based on file count, complementing the existing ConcurrentUploadLimit which limits based on total data size. This addresses an OOM vulnerability where requests with missing/zero Content-Length headers could bypass the size-based rate limiter.

Changes:
- Add ConcurrentUploadLimit and ConcurrentFileUploadLimit fields to S3ApiServerOption
- Add inFlightDataSize, inFlightUploads, and inFlightDataLimitCond to S3ApiServer
- Add s3a reference to CircuitBreaker for upload limiting
- Enhance CircuitBreaker.Limit() to apply upload limiting for write actions
- Add -concurrentUploadLimitMB and -concurrentFileUploadLimit command line flags
- Add s3.concurrentUploadLimitMB and s3.concurrentFileUploadLimit flags to filer command

The upload limiting is integrated into the existing CircuitBreaker.Limit() function, avoiding creation of new wrapper functions and reusing the existing handler registration pattern.

Fixes #7529

* server: add missing concurrentFileUploadLimit flags for server command

The server command was missing the initialization of concurrentFileUploadLimit flags for both filer and S3, causing a nil pointer dereference when starting the server in combined mode.

This adds:
- filer.concurrentFileUploadLimit flag to server command
- s3.concurrentUploadLimitMB flag to server command
- s3.concurrentFileUploadLimit flag to server command

Fixes the panic: runtime error: invalid memory address or nil pointer dereference at filer.go:332

* http status 503

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
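A standalone sketch of the count-based in-flight gate added by the last commits above, using illustrative names (uploadGate, acquire, release). The actual integration, which combines this count check with the existing size-based check using atomics and a shared sync.Cond, appears in the filer handler and CircuitBreaker.Limit() hunks in the diff below.

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // uploadGate blocks new uploads while the number of in-flight uploads is at
    // the configured limit. A limit of 0 means unlimited, matching the flags.
    type uploadGate struct {
    	cond     *sync.Cond
    	inFlight int64
    	limit    int64
    }

    func newUploadGate(limit int64) *uploadGate {
    	return &uploadGate{cond: sync.NewCond(new(sync.Mutex)), limit: limit}
    }

    func (g *uploadGate) acquire() {
    	g.cond.L.Lock()
    	for g.limit != 0 && g.inFlight >= g.limit {
    		g.cond.Wait() // woken by release()
    	}
    	g.inFlight++
    	g.cond.L.Unlock()
    }

    func (g *uploadGate) release() {
    	g.cond.L.Lock()
    	g.inFlight--
    	g.cond.L.Unlock()
    	g.cond.Signal()
    }

    func main() {
    	gate := newUploadGate(2) // e.g. -concurrentFileUploadLimit=2
    	var wg sync.WaitGroup
    	for i := 0; i < 5; i++ {
    		wg.Add(1)
    		go func(n int) {
    			defer wg.Done()
    			gate.acquire()
    			defer gate.release()
    			fmt.Println("upload", n, "in flight")
    			time.Sleep(100 * time.Millisecond) // simulated upload body
    		}(i)
    	}
    	wg.Wait()
    }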
-rw-r--r--  weed/command/filer.go                  107
-rw-r--r--  weed/command/s3.go                       6
-rw-r--r--  weed/command/server.go                   3
-rw-r--r--  weed/s3api/s3api_circuit_breaker.go     43
-rw-r--r--  weed/s3api/s3api_server.go              52
-rw-r--r--  weed/s3api/s3err/s3api_errors.go         4
-rw-r--r--  weed/server/filer_server.go             46
-rw-r--r--  weed/server/filer_server_handlers.go    18
8 files changed, 181 insertions, 98 deletions
diff --git a/weed/command/filer.go b/weed/command/filer.go
index 053c5a147..86991a181 100644
--- a/weed/command/filer.go
+++ b/weed/command/filer.go
@@ -42,38 +42,39 @@ var (
)
type FilerOptions struct {
- masters *pb.ServerDiscovery
- mastersString *string
- ip *string
- bindIp *string
- port *int
- portGrpc *int
- publicPort *int
- filerGroup *string
- collection *string
- defaultReplicaPlacement *string
- disableDirListing *bool
- maxMB *int
- dirListingLimit *int
- dataCenter *string
- rack *string
- enableNotification *bool
- disableHttp *bool
- cipher *bool
- metricsHttpPort *int
- metricsHttpIp *string
- saveToFilerLimit *int
- defaultLevelDbDirectory *string
- concurrentUploadLimitMB *int
- debug *bool
- debugPort *int
- localSocket *string
- showUIDirectoryDelete *bool
- downloadMaxMBps *int
- diskType *string
- allowedOrigins *string
- exposeDirectoryData *bool
- certProvider certprovider.Provider
+ masters *pb.ServerDiscovery
+ mastersString *string
+ ip *string
+ bindIp *string
+ port *int
+ portGrpc *int
+ publicPort *int
+ filerGroup *string
+ collection *string
+ defaultReplicaPlacement *string
+ disableDirListing *bool
+ maxMB *int
+ dirListingLimit *int
+ dataCenter *string
+ rack *string
+ enableNotification *bool
+ disableHttp *bool
+ cipher *bool
+ metricsHttpPort *int
+ metricsHttpIp *string
+ saveToFilerLimit *int
+ defaultLevelDbDirectory *string
+ concurrentUploadLimitMB *int
+ concurrentFileUploadLimit *int
+ debug *bool
+ debugPort *int
+ localSocket *string
+ showUIDirectoryDelete *bool
+ downloadMaxMBps *int
+ diskType *string
+ allowedOrigins *string
+ exposeDirectoryData *bool
+ certProvider certprovider.Provider
}
func init() {
@@ -99,6 +100,7 @@ func init() {
f.saveToFilerLimit = cmdFiler.Flag.Int("saveToFilerLimit", 0, "files smaller than this limit will be saved in filer store")
f.defaultLevelDbDirectory = cmdFiler.Flag.String("defaultStoreDir", ".", "if filer.toml is empty, use an embedded filer store in the directory")
f.concurrentUploadLimitMB = cmdFiler.Flag.Int("concurrentUploadLimitMB", 128, "limit total concurrent upload size")
+ f.concurrentFileUploadLimit = cmdFiler.Flag.Int("concurrentFileUploadLimit", 0, "limit number of concurrent file uploads, 0 means unlimited")
f.debug = cmdFiler.Flag.Bool("debug", false, "serves runtime profiling data, e.g., http://localhost:<debug.port>/debug/pprof/goroutine?debug=2")
f.debugPort = cmdFiler.Flag.Int("debug.port", 6060, "http port for debugging")
f.localSocket = cmdFiler.Flag.String("localSocket", "", "default to /tmp/seaweedfs-filer-<port>.sock")
@@ -127,6 +129,8 @@ func init() {
filerS3Options.tlsVerifyClientCert = cmdFiler.Flag.Bool("s3.tlsVerifyClientCert", false, "whether to verify the client's certificate")
filerS3Options.bindIp = cmdFiler.Flag.String("s3.ip.bind", "", "ip address to bind to. If empty, default to same as -ip.bind option.")
filerS3Options.idleTimeout = cmdFiler.Flag.Int("s3.idleTimeout", 10, "connection idle seconds")
+ filerS3Options.concurrentUploadLimitMB = cmdFiler.Flag.Int("s3.concurrentUploadLimitMB", 128, "limit total concurrent upload size for S3")
+ filerS3Options.concurrentFileUploadLimit = cmdFiler.Flag.Int("s3.concurrentFileUploadLimit", 0, "limit number of concurrent file uploads for S3, 0 means unlimited")
// start webdav on filer
filerStartWebDav = cmdFiler.Flag.Bool("webdav", false, "whether to start webdav gateway")
@@ -310,25 +314,26 @@ func (fo *FilerOptions) startFiler() {
filerAddress := pb.NewServerAddress(*fo.ip, *fo.port, *fo.portGrpc)
fs, nfs_err := weed_server.NewFilerServer(defaultMux, publicVolumeMux, &weed_server.FilerOption{
- Masters: fo.masters,
- FilerGroup: *fo.filerGroup,
- Collection: *fo.collection,
- DefaultReplication: *fo.defaultReplicaPlacement,
- DisableDirListing: *fo.disableDirListing,
- MaxMB: *fo.maxMB,
- DirListingLimit: *fo.dirListingLimit,
- DataCenter: *fo.dataCenter,
- Rack: *fo.rack,
- DefaultLevelDbDir: defaultLevelDbDirectory,
- DisableHttp: *fo.disableHttp,
- Host: filerAddress,
- Cipher: *fo.cipher,
- SaveToFilerLimit: int64(*fo.saveToFilerLimit),
- ConcurrentUploadLimit: int64(*fo.concurrentUploadLimitMB) * 1024 * 1024,
- ShowUIDirectoryDelete: *fo.showUIDirectoryDelete,
- DownloadMaxBytesPs: int64(*fo.downloadMaxMBps) * 1024 * 1024,
- DiskType: *fo.diskType,
- AllowedOrigins: strings.Split(*fo.allowedOrigins, ","),
+ Masters: fo.masters,
+ FilerGroup: *fo.filerGroup,
+ Collection: *fo.collection,
+ DefaultReplication: *fo.defaultReplicaPlacement,
+ DisableDirListing: *fo.disableDirListing,
+ MaxMB: *fo.maxMB,
+ DirListingLimit: *fo.dirListingLimit,
+ DataCenter: *fo.dataCenter,
+ Rack: *fo.rack,
+ DefaultLevelDbDir: defaultLevelDbDirectory,
+ DisableHttp: *fo.disableHttp,
+ Host: filerAddress,
+ Cipher: *fo.cipher,
+ SaveToFilerLimit: int64(*fo.saveToFilerLimit),
+ ConcurrentUploadLimit: int64(*fo.concurrentUploadLimitMB) * 1024 * 1024,
+ ConcurrentFileUploadLimit: int64(*fo.concurrentFileUploadLimit),
+ ShowUIDirectoryDelete: *fo.showUIDirectoryDelete,
+ DownloadMaxBytesPs: int64(*fo.downloadMaxMBps) * 1024 * 1024,
+ DiskType: *fo.diskType,
+ AllowedOrigins: strings.Split(*fo.allowedOrigins, ","),
})
if nfs_err != nil {
glog.Fatalf("Filer startup error: %v", nfs_err)
diff --git a/weed/command/s3.go b/weed/command/s3.go
index 995d15f8a..61222336b 100644
--- a/weed/command/s3.go
+++ b/weed/command/s3.go
@@ -57,6 +57,8 @@ type S3Options struct {
localSocket *string
certProvider certprovider.Provider
idleTimeout *int
+ concurrentUploadLimitMB *int
+ concurrentFileUploadLimit *int
}
func init() {
@@ -83,6 +85,8 @@ func init() {
s3StandaloneOptions.localFilerSocket = cmdS3.Flag.String("localFilerSocket", "", "local filer socket path")
s3StandaloneOptions.localSocket = cmdS3.Flag.String("localSocket", "", "default to /tmp/seaweedfs-s3-<port>.sock")
s3StandaloneOptions.idleTimeout = cmdS3.Flag.Int("idleTimeout", 10, "connection idle seconds")
+ s3StandaloneOptions.concurrentUploadLimitMB = cmdS3.Flag.Int("concurrentUploadLimitMB", 128, "limit total concurrent upload size")
+ s3StandaloneOptions.concurrentFileUploadLimit = cmdS3.Flag.Int("concurrentFileUploadLimit", 0, "limit number of concurrent file uploads, 0 means unlimited")
}
var cmdS3 = &Command{
@@ -275,6 +279,8 @@ func (s3opt *S3Options) startS3Server() bool {
DataCenter: *s3opt.dataCenter,
FilerGroup: filerGroup,
IamConfig: iamConfigPath, // Advanced IAM config (optional)
+ ConcurrentUploadLimit: int64(*s3opt.concurrentUploadLimitMB) * 1024 * 1024,
+ ConcurrentFileUploadLimit: int64(*s3opt.concurrentFileUploadLimit),
})
if s3ApiServer_err != nil {
glog.Fatalf("S3 API Server startup error: %v", s3ApiServer_err)
diff --git a/weed/command/server.go b/weed/command/server.go
index 3cdde48c6..47df30fc2 100644
--- a/weed/command/server.go
+++ b/weed/command/server.go
@@ -123,6 +123,7 @@ func init() {
filerOptions.cipher = cmdServer.Flag.Bool("filer.encryptVolumeData", false, "encrypt data on volume servers")
filerOptions.saveToFilerLimit = cmdServer.Flag.Int("filer.saveToFilerLimit", 0, "Small files smaller than this limit can be cached in filer store.")
filerOptions.concurrentUploadLimitMB = cmdServer.Flag.Int("filer.concurrentUploadLimitMB", 64, "limit total concurrent upload size")
+ filerOptions.concurrentFileUploadLimit = cmdServer.Flag.Int("filer.concurrentFileUploadLimit", 0, "limit number of concurrent file uploads, 0 means unlimited")
filerOptions.localSocket = cmdServer.Flag.String("filer.localSocket", "", "default to /tmp/seaweedfs-filer-<port>.sock")
filerOptions.showUIDirectoryDelete = cmdServer.Flag.Bool("filer.ui.deleteDir", true, "enable filer UI show delete directory button")
filerOptions.downloadMaxMBps = cmdServer.Flag.Int("filer.downloadMaxMBps", 0, "download max speed for each download request, in MB per second")
@@ -168,6 +169,8 @@ func init() {
s3Options.localSocket = cmdServer.Flag.String("s3.localSocket", "", "default to /tmp/seaweedfs-s3-<port>.sock")
s3Options.bindIp = cmdServer.Flag.String("s3.ip.bind", "", "ip address to bind to. If empty, default to same as -ip.bind option.")
s3Options.idleTimeout = cmdServer.Flag.Int("s3.idleTimeout", 10, "connection idle seconds")
+ s3Options.concurrentUploadLimitMB = cmdServer.Flag.Int("s3.concurrentUploadLimitMB", 128, "limit total concurrent upload size for S3")
+ s3Options.concurrentFileUploadLimit = cmdServer.Flag.Int("s3.concurrentFileUploadLimit", 0, "limit number of concurrent file uploads for S3, 0 means unlimited")
sftpOptions.port = cmdServer.Flag.Int("sftp.port", 2022, "SFTP server listen port")
sftpOptions.sshPrivateKey = cmdServer.Flag.String("sftp.sshPrivateKey", "", "path to the SSH private key file for host authentication")
diff --git a/weed/s3api/s3api_circuit_breaker.go b/weed/s3api/s3api_circuit_breaker.go
index 2f5e1f580..3c4f55a23 100644
--- a/weed/s3api/s3api_circuit_breaker.go
+++ b/weed/s3api/s3api_circuit_breaker.go
@@ -21,6 +21,7 @@ type CircuitBreaker struct {
Enabled bool
counters map[string]*int64
limitations map[string]int64
+ s3a *S3ApiServer
}
func NewCircuitBreaker(option *S3ApiServerOption) *CircuitBreaker {
@@ -89,6 +90,48 @@ func (cb *CircuitBreaker) loadCircuitBreakerConfig(cfg *s3_pb.S3CircuitBreakerCo
func (cb *CircuitBreaker) Limit(f func(w http.ResponseWriter, r *http.Request), action string) (http.HandlerFunc, Action) {
return func(w http.ResponseWriter, r *http.Request) {
+ // Apply upload limiting for write actions if configured
+ if cb.s3a != nil && (action == s3_constants.ACTION_WRITE) &&
+ (cb.s3a.option.ConcurrentUploadLimit != 0 || cb.s3a.option.ConcurrentFileUploadLimit != 0) {
+
+ // Get content length, default to 0 if not provided
+ contentLength := r.ContentLength
+ if contentLength < 0 {
+ contentLength = 0
+ }
+
+ // Wait until in flight data is less than the limit
+ cb.s3a.inFlightDataLimitCond.L.Lock()
+ inFlightDataSize := atomic.LoadInt64(&cb.s3a.inFlightDataSize)
+ inFlightUploads := atomic.LoadInt64(&cb.s3a.inFlightUploads)
+
+ // Wait if either data size limit or file count limit is exceeded
+ for (cb.s3a.option.ConcurrentUploadLimit != 0 && inFlightDataSize > cb.s3a.option.ConcurrentUploadLimit) ||
+ (cb.s3a.option.ConcurrentFileUploadLimit != 0 && inFlightUploads >= cb.s3a.option.ConcurrentFileUploadLimit) {
+ if (cb.s3a.option.ConcurrentUploadLimit != 0 && inFlightDataSize > cb.s3a.option.ConcurrentUploadLimit) {
+ glog.V(4).Infof("wait because inflight data %d > %d", inFlightDataSize, cb.s3a.option.ConcurrentUploadLimit)
+ }
+ if (cb.s3a.option.ConcurrentFileUploadLimit != 0 && inFlightUploads >= cb.s3a.option.ConcurrentFileUploadLimit) {
+ glog.V(4).Infof("wait because inflight uploads %d >= %d", inFlightUploads, cb.s3a.option.ConcurrentFileUploadLimit)
+ }
+ cb.s3a.inFlightDataLimitCond.Wait()
+ inFlightDataSize = atomic.LoadInt64(&cb.s3a.inFlightDataSize)
+ inFlightUploads = atomic.LoadInt64(&cb.s3a.inFlightUploads)
+ }
+ cb.s3a.inFlightDataLimitCond.L.Unlock()
+
+ // Increment counters
+ atomic.AddInt64(&cb.s3a.inFlightUploads, 1)
+ atomic.AddInt64(&cb.s3a.inFlightDataSize, contentLength)
+ defer func() {
+ // Decrement counters
+ atomic.AddInt64(&cb.s3a.inFlightUploads, -1)
+ atomic.AddInt64(&cb.s3a.inFlightDataSize, -contentLength)
+ cb.s3a.inFlightDataLimitCond.Signal()
+ }()
+ }
+
+ // Apply circuit breaker logic
if !cb.Enabled {
f(w, r)
return
diff --git a/weed/s3api/s3api_server.go b/weed/s3api/s3api_server.go
index dcf3a55f2..a1a3f100b 100644
--- a/weed/s3api/s3api_server.go
+++ b/weed/s3api/s3api_server.go
@@ -9,6 +9,7 @@ import (
"os"
"slices"
"strings"
+ "sync"
"time"
"github.com/gorilla/mux"
@@ -48,22 +49,27 @@ type S3ApiServerOption struct {
DataCenter string
FilerGroup string
IamConfig string // Advanced IAM configuration file path
+ ConcurrentUploadLimit int64
+ ConcurrentFileUploadLimit int64
}
type S3ApiServer struct {
s3_pb.UnimplementedSeaweedS3Server
- option *S3ApiServerOption
- iam *IdentityAccessManagement
- iamIntegration *S3IAMIntegration // Advanced IAM integration for JWT authentication
- cb *CircuitBreaker
- randomClientId int32
- filerGuard *security.Guard
- filerClient *wdclient.FilerClient
- client util_http_client.HTTPClientInterface
- bucketRegistry *BucketRegistry
- credentialManager *credential.CredentialManager
- bucketConfigCache *BucketConfigCache
- policyEngine *BucketPolicyEngine // Engine for evaluating bucket policies
+ option *S3ApiServerOption
+ iam *IdentityAccessManagement
+ iamIntegration *S3IAMIntegration // Advanced IAM integration for JWT authentication
+ cb *CircuitBreaker
+ randomClientId int32
+ filerGuard *security.Guard
+ filerClient *wdclient.FilerClient
+ client util_http_client.HTTPClientInterface
+ bucketRegistry *BucketRegistry
+ credentialManager *credential.CredentialManager
+ bucketConfigCache *BucketConfigCache
+ policyEngine *BucketPolicyEngine // Engine for evaluating bucket policies
+ inFlightDataSize int64
+ inFlightUploads int64
+ inFlightDataLimitCond *sync.Cond
}
func NewS3ApiServer(router *mux.Router, option *S3ApiServerOption) (s3ApiServer *S3ApiServer, err error) {
@@ -135,17 +141,21 @@ func NewS3ApiServerWithStore(router *mux.Router, option *S3ApiServerOption, expl
}
s3ApiServer = &S3ApiServer{
- option: option,
- iam: iam,
- randomClientId: util.RandomInt32(),
- filerGuard: security.NewGuard([]string{}, signingKey, expiresAfterSec, readSigningKey, readExpiresAfterSec),
- filerClient: filerClient,
- cb: NewCircuitBreaker(option),
- credentialManager: iam.credentialManager,
- bucketConfigCache: NewBucketConfigCache(60 * time.Minute), // Increased TTL since cache is now event-driven
- policyEngine: policyEngine, // Initialize bucket policy engine
+ option: option,
+ iam: iam,
+ randomClientId: util.RandomInt32(),
+ filerGuard: security.NewGuard([]string{}, signingKey, expiresAfterSec, readSigningKey, readExpiresAfterSec),
+ filerClient: filerClient,
+ cb: NewCircuitBreaker(option),
+ credentialManager: iam.credentialManager,
+ bucketConfigCache: NewBucketConfigCache(60 * time.Minute), // Increased TTL since cache is now event-driven
+ policyEngine: policyEngine, // Initialize bucket policy engine
+ inFlightDataLimitCond: sync.NewCond(new(sync.Mutex)),
}
+ // Set s3a reference in circuit breaker for upload limiting
+ s3ApiServer.cb.s3a = s3ApiServer
+
// Pass policy engine to IAM for bucket policy evaluation
// This avoids circular dependency by not passing the entire S3ApiServer
iam.policyEngine = policyEngine
diff --git a/weed/s3api/s3err/s3api_errors.go b/weed/s3api/s3err/s3api_errors.go
index 762289bce..189c6ba86 100644
--- a/weed/s3api/s3err/s3api_errors.go
+++ b/weed/s3api/s3err/s3api_errors.go
@@ -498,12 +498,12 @@ var errorCodeResponse = map[ErrorCode]APIError{
ErrTooManyRequest: {
Code: "ErrTooManyRequest",
Description: "Too many simultaneous request count",
- HTTPStatusCode: http.StatusTooManyRequests,
+ HTTPStatusCode: http.StatusServiceUnavailable,
},
ErrRequestBytesExceed: {
Code: "ErrRequestBytesExceed",
Description: "Simultaneous request bytes exceed limitations",
- HTTPStatusCode: http.StatusTooManyRequests,
+ HTTPStatusCode: http.StatusServiceUnavailable,
},
OwnershipControlsNotFoundError: {
diff --git a/weed/server/filer_server.go b/weed/server/filer_server.go
index 3d08c0980..95d344af4 100644
--- a/weed/server/filer_server.go
+++ b/weed/server/filer_server.go
@@ -56,32 +56,34 @@ import (
)
type FilerOption struct {
- Masters *pb.ServerDiscovery
- FilerGroup string
- Collection string
- DefaultReplication string
- DisableDirListing bool
- MaxMB int
- DirListingLimit int
- DataCenter string
- Rack string
- DataNode string
- DefaultLevelDbDir string
- DisableHttp bool
- Host pb.ServerAddress
- recursiveDelete bool
- Cipher bool
- SaveToFilerLimit int64
- ConcurrentUploadLimit int64
- ShowUIDirectoryDelete bool
- DownloadMaxBytesPs int64
- DiskType string
- AllowedOrigins []string
- ExposeDirectoryData bool
+ Masters *pb.ServerDiscovery
+ FilerGroup string
+ Collection string
+ DefaultReplication string
+ DisableDirListing bool
+ MaxMB int
+ DirListingLimit int
+ DataCenter string
+ Rack string
+ DataNode string
+ DefaultLevelDbDir string
+ DisableHttp bool
+ Host pb.ServerAddress
+ recursiveDelete bool
+ Cipher bool
+ SaveToFilerLimit int64
+ ConcurrentUploadLimit int64
+ ConcurrentFileUploadLimit int64
+ ShowUIDirectoryDelete bool
+ DownloadMaxBytesPs int64
+ DiskType string
+ AllowedOrigins []string
+ ExposeDirectoryData bool
}
type FilerServer struct {
inFlightDataSize int64
+ inFlightUploads int64
listenersWaits int64
// notifying clients
diff --git a/weed/server/filer_server_handlers.go b/weed/server/filer_server_handlers.go
index dcfc8e3ed..a2eab9365 100644
--- a/weed/server/filer_server_handlers.go
+++ b/weed/server/filer_server_handlers.go
@@ -95,14 +95,28 @@ func (fs *FilerServer) filerHandler(w http.ResponseWriter, r *http.Request) {
contentLength := getContentLength(r)
fs.inFlightDataLimitCond.L.Lock()
inFlightDataSize := atomic.LoadInt64(&fs.inFlightDataSize)
- for fs.option.ConcurrentUploadLimit != 0 && inFlightDataSize > fs.option.ConcurrentUploadLimit {
- glog.V(4).Infof("wait because inflight data %d > %d", inFlightDataSize, fs.option.ConcurrentUploadLimit)
+ inFlightUploads := atomic.LoadInt64(&fs.inFlightUploads)
+
+ // Wait if either data size limit or file count limit is exceeded
+ for (fs.option.ConcurrentUploadLimit != 0 && inFlightDataSize > fs.option.ConcurrentUploadLimit) || (fs.option.ConcurrentFileUploadLimit != 0 && inFlightUploads >= fs.option.ConcurrentFileUploadLimit) {
+ if (fs.option.ConcurrentUploadLimit != 0 && inFlightDataSize > fs.option.ConcurrentUploadLimit) {
+ glog.V(4).Infof("wait because inflight data %d > %d", inFlightDataSize, fs.option.ConcurrentUploadLimit)
+ }
+ if (fs.option.ConcurrentFileUploadLimit != 0 && inFlightUploads >= fs.option.ConcurrentFileUploadLimit) {
+ glog.V(4).Infof("wait because inflight uploads %d >= %d", inFlightUploads, fs.option.ConcurrentFileUploadLimit)
+ }
fs.inFlightDataLimitCond.Wait()
inFlightDataSize = atomic.LoadInt64(&fs.inFlightDataSize)
+ inFlightUploads = atomic.LoadInt64(&fs.inFlightUploads)
}
fs.inFlightDataLimitCond.L.Unlock()
+
+ // Increment counters
+ atomic.AddInt64(&fs.inFlightUploads, 1)
atomic.AddInt64(&fs.inFlightDataSize, contentLength)
defer func() {
+ // Decrement counters
+ atomic.AddInt64(&fs.inFlightUploads, -1)
atomic.AddInt64(&fs.inFlightDataSize, -contentLength)
fs.inFlightDataLimitCond.Signal()
}()