| author | Chris Lu <chrislusf@users.noreply.github.com> | 2025-07-30 12:38:03 -0700 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-07-30 12:38:03 -0700 |
| commit | 891a2fb6ebc324329f5330a140b8cacff3899db4 (patch) | |
| tree | d02aaa80a909e958aea831f206b3240b0237d7b7 /weed/worker/client_tls_test.go | |
| parent | 64198dad8346fe284cbef944fe01ff0d062c147d (diff) | |
| download | seaweedfs-891a2fb6ebc324329f5330a140b8cacff3899db4.tar.xz seaweedfs-891a2fb6ebc324329f5330a140b8cacff3899db4.zip | |
Admin: misc improvements on admin server and workers. EC now works. (#7055)
* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test. ec worker task
* remove "enhanced" reference
* start master, volume servers, filer
Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready
* generate write load
* tasks are assigned
* admin starts with grpc port. worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
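For context, a minimal sketch of what a size-based detection rule like this might look like, assuming a hypothetical `volumeInfo` summary and illustrative thresholds (not the actual SeaweedFS detector code):

```go
package main

import (
	"fmt"
	"time"
)

// volumeInfo is a hypothetical summary used only for this illustration.
type volumeInfo struct {
	SizeBytes    uint64
	LastModified time.Time
}

// isECCandidate sketches a size-based rule: a volume qualifies for erasure
// coding once it approaches the master's volumeSizeLimitMB and has been quiet
// for a while. The 95% threshold is illustrative, not the real detector value.
func isECCandidate(v volumeInfo, volumeSizeLimitMB uint64, quietFor time.Duration) bool {
	limitBytes := volumeSizeLimitMB * 1024 * 1024
	nearlyFull := v.SizeBytes >= limitBytes*95/100
	quiet := time.Since(v.LastModified) >= quietFor
	return nearlyFull && quiet
}

func main() {
	v := volumeInfo{SizeBytes: 30 << 30, LastModified: time.Now().Add(-2 * time.Hour)}
	fmt.Println(isECCandidate(v, 30*1024, time.Hour)) // true: full-sized and idle
}
```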
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnect to admin
* clean up logs
* worker register itself first
* worker can run ec work and report status
but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and source data should be deleted.
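A minimal sketch of how point 1 could be handled, assuming a hypothetical in-flight tracker keyed by volume id; the names here are illustrative and not the actual SeaweedFS scheduler types:

```go
package main

import "sync"

// inFlightVolumes keeps a volume from being picked up by more than one
// EC task at a time.
type inFlightVolumes struct {
	mu      sync.Mutex
	volumes map[uint32]struct{}
}

func newInFlightVolumes() *inFlightVolumes {
	return &inFlightVolumes{volumes: make(map[uint32]struct{})}
}

// tryAcquire returns false if the volume already has a task in flight.
func (f *inFlightVolumes) tryAcquire(volumeID uint32) bool {
	f.mu.Lock()
	defer f.mu.Unlock()
	if _, busy := f.volumes[volumeID]; busy {
		return false
	}
	f.volumes[volumeID] = struct{}{}
	return true
}

// release frees the volume once its task completes or fails.
func (f *inFlightVolumes) release(volumeID uint32) {
	f.mu.Lock()
	defer f.mu.Unlock()
	delete(f.volumes, volumeID)
}

func main() {
	tracker := newInFlightVolumes()
	if tracker.tryAcquire(42) {
		defer tracker.release(42)
		// ... run EC encoding for volume 42 ...
	}
}
```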
* move ec task logic
* listing ec shards
* local copy, ec. Need to distribute.
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig use ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task register its schema
* checkECEncodingCandidate use ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
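A minimal sketch of the kind of fix this implies, assuming hypothetical types: create the maintenance manager unconditionally and attach persistent storage only when a data directory is configured (the real admin server wiring differs):

```go
package main

// Hypothetical stand-ins for the admin server internals; the real
// SeaweedFS structures differ.
type MaintenanceManager struct{ enabled bool }
type ConfigStore struct{ dir string }

type AdminServer struct {
	maintenanceManager *MaintenanceManager
	configStore        *ConfigStore // nil when no data directory is configured
}

// NewAdminServer always creates the maintenance manager; persistent storage
// is optional and only attached when dataDir is non-empty.
func NewAdminServer(dataDir string) *AdminServer {
	s := &AdminServer{maintenanceManager: &MaintenanceManager{enabled: true}}
	if dataDir != "" {
		s.configStore = &ConfigStore{dir: dataDir}
	}
	return s
}

func main() {
	_ = NewAdminServer("") // manager is initialized even without a data dir
}
```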
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} Reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume server during task planning
* volume.list adds disk id
* track disk id also
* fix locking scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implementing ec copying shards and only ecx files
* use disk id when distributing ec shards
🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
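A simplified Go sketch of this disk-id plumbing, using stand-in structs for the planning and RPC types named above (the real generated protobuf messages and topology types have more fields):

```go
package main

import "fmt"

// Simplified stand-ins for the planning and RPC types named in the commit
// message; not the actual SeaweedFS definitions.
type DestinationPlan struct {
	TargetNode string
	TargetDisk uint32
}

type ECDestination struct {
	Node   string
	DiskId uint32
}

type VolumeEcShardsCopyRequest struct {
	VolumeId uint32
	ShardIds []uint32
	DiskId   uint32 // tells the volume server which store location to use
}

// buildCopyRequest shows how the disk id chosen during planning travels with
// the shard copy request, so the destination server can write to the planned
// disk (conceptually, vs.store.Locations[req.DiskId]).
func buildCopyRequest(volumeID uint32, shardIDs []uint32, plan DestinationPlan) (ECDestination, VolumeEcShardsCopyRequest) {
	dest := ECDestination{Node: plan.TargetNode, DiskId: plan.TargetDisk}
	req := VolumeEcShardsCopyRequest{VolumeId: volumeID, ShardIds: shardIDs, DiskId: dest.DiskId}
	return dest, req
}

func main() {
	plan := DestinationPlan{TargetNode: "volume-server-3:8080", TargetDisk: 2}
	dest, req := buildCopyRequest(7, []uint32{0, 1, 2, 3}, plan)
	fmt.Printf("send %d shards of volume %d to %s, disk %d\n", len(req.ShardIds), req.VolumeId, dest.Node, req.DiskId)
}
```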
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Diffstat (limited to 'weed/worker/client_tls_test.go')
| -rw-r--r-- | weed/worker/client_tls_test.go | 146 |
1 file changed, 0 insertions, 146 deletions
diff --git a/weed/worker/client_tls_test.go b/weed/worker/client_tls_test.go
deleted file mode 100644
index d95d5f4f5..000000000
--- a/weed/worker/client_tls_test.go
+++ /dev/null
@@ -1,146 +0,0 @@
-package worker
-
-import (
-	"strings"
-	"testing"
-	"time"
-
-	"google.golang.org/grpc"
-	"google.golang.org/grpc/credentials/insecure"
-)
-
-func TestGrpcClientTLSDetection(t *testing.T) {
-	// Test that the client can be created with a dial option
-	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
-	client := NewGrpcAdminClient("localhost:33646", "test-worker", dialOption)
-
-	// Test that the client has the correct dial option
-	if client.dialOption == nil {
-		t.Error("Client should have a dial option")
-	}
-
-	t.Logf("Client created successfully with dial option")
-}
-
-func TestCreateAdminClientGrpc(t *testing.T) {
-	// Test client creation - admin server port gets transformed to gRPC port
-	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
-	client, err := CreateAdminClient("localhost:23646", "test-worker", dialOption)
-	if err != nil {
-		t.Fatalf("Failed to create admin client: %v", err)
-	}
-
-	if client == nil {
-		t.Fatal("Client should not be nil")
-	}
-
-	// Verify it's the correct type
-	grpcClient, ok := client.(*GrpcAdminClient)
-	if !ok {
-		t.Fatal("Client should be GrpcAdminClient type")
-	}
-
-	// The admin address should be transformed to the gRPC port (HTTP + 10000)
-	expectedAddress := "localhost:33646" // 23646 + 10000
-	if grpcClient.adminAddress != expectedAddress {
-		t.Errorf("Expected admin address %s, got %s", expectedAddress, grpcClient.adminAddress)
-	}
-
-	if grpcClient.workerID != "test-worker" {
-		t.Errorf("Expected worker ID test-worker, got %s", grpcClient.workerID)
-	}
-}
-
-func TestConnectionTimeouts(t *testing.T) {
-	// Test that connections have proper timeouts
-	// Use localhost with a port that's definitely closed
-	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
-	client := NewGrpcAdminClient("localhost:1", "test-worker", dialOption) // Port 1 is reserved and won't be open
-
-	// Test that the connection creation fails when actually trying to use it
-	start := time.Now()
-	err := client.Connect() // This should fail when trying to establish the stream
-	duration := time.Since(start)
-
-	if err == nil {
-		t.Error("Expected connection to closed port to fail")
-	} else {
-		t.Logf("Connection failed as expected: %v", err)
-	}
-
-	// Should fail quickly but not too quickly
-	if duration > 10*time.Second {
-		t.Errorf("Connection attempt took too long: %v", duration)
-	}
-}
-
-func TestConnectionWithDialOption(t *testing.T) {
-	// Test that the connection uses the provided dial option
-	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
-	client := NewGrpcAdminClient("localhost:1", "test-worker", dialOption) // Port 1 is reserved and won't be open
-
-	// Test the actual connection
-	err := client.Connect()
-	if err == nil {
-		t.Error("Expected connection to closed port to fail")
-		client.Disconnect() // Clean up if it somehow succeeded
-	} else {
-		t.Logf("Connection failed as expected: %v", err)
-	}
-
-	// The error should indicate a connection failure
-	if err != nil && err.Error() != "" {
-		t.Logf("Connection error message: %s", err.Error())
-		// The error should contain connection-related terms
-		if !strings.Contains(err.Error(), "connection") && !strings.Contains(err.Error(), "dial") {
-			t.Logf("Error message doesn't indicate connection issues: %s", err.Error())
-		}
-	}
-}
-
-func TestClientWithSecureDialOption(t *testing.T) {
-	// Test that the client correctly uses a secure dial option
-	// This would normally use LoadClientTLS, but for testing we'll use insecure
-	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
-	client := NewGrpcAdminClient("localhost:33646", "test-worker", dialOption)
-
-	if client.dialOption == nil {
-		t.Error("Client should have a dial option")
-	}
-
-	t.Logf("Client created successfully with dial option")
-}
-
-func TestConnectionWithRealAddress(t *testing.T) {
-	// Test connection behavior with a real address that doesn't support gRPC
-	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
-	client := NewGrpcAdminClient("www.google.com:80", "test-worker", dialOption) // HTTP port, not gRPC
-
-	err := client.Connect()
-	if err == nil {
-		t.Log("Connection succeeded unexpectedly")
-		client.Disconnect()
-	} else {
-		t.Logf("Connection failed as expected: %v", err)
-	}
-}
-
-func TestDialOptionUsage(t *testing.T) {
-	// Test that the provided dial option is used for connections
-	dialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
-	client := NewGrpcAdminClient("localhost:1", "test-worker", dialOption) // Port 1 won't support gRPC at all
-
-	// Verify the dial option is stored
-	if client.dialOption == nil {
-		t.Error("Dial option should be stored in client")
-	}
-
-	// Test connection fails appropriately
-	err := client.Connect()
-	if err == nil {
-		t.Error("Connection should fail to non-gRPC port")
-		client.Disconnect()
-	} else {
-		t.Logf("Connection failed as expected: %v", err)
-	}
-}
