| author | Chris Lu <chrislusf@users.noreply.github.com> | 2025-07-30 12:38:03 -0700 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2025-07-30 12:38:03 -0700 |
| commit | 891a2fb6ebc324329f5330a140b8cacff3899db4 (patch) | |
| tree | d02aaa80a909e958aea831f206b3240b0237d7b7 /weed/server/volume_grpc_copy.go | |
| parent | 64198dad8346fe284cbef944fe01ff0d062c147d (diff) | |
| download | seaweedfs-891a2fb6ebc324329f5330a140b8cacff3899db4.tar.xz seaweedfs-891a2fb6ebc324329f5330a140b8cacff3899db4.zip | |
Admin: misc improvements on admin server and workers. EC now works. (#7055)
* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test. ec worker task
* remove "enhanced" reference
* start master, volume servers, filer
Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready
* generate write load
* tasks are assigned
* admin starts with grpc port. worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnect to admin
* clean up logs
* worker register itself first
* worker can run ec work and report status
but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and source data should be deleted.
* move ec task logic
* listing ec shards
* local copy, ec. Need to distribute.
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig use ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task register its schema
* checkECEncodingCandidate use ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} Reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume server during task planning
* volume.list adds disk id
* track disk id also
* fix locking for scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implement copying ec shards and only the ecx files
* use disk id when distributing ec shards (see the sketch after the flow below)
🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
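A minimal sketch of the worker-side handoff described above, assuming the `DiskId` field this PR adds to `VolumeEcShardsCopyRequest`; the helper name and the plan-derived parameters are hypothetical stand-ins for what ActiveTopology's DestinationPlan provides:

```go
package worker // illustrative placement, not part of this PR

import (
	"context"

	"github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb"
)

// copyShardsToPlannedDisk forwards the planner's TargetDisk as disk_id so the
// volume server stores the shards on vs.store.Locations[req.DiskId].
// Hypothetical helper; the parameters mirror a DestinationPlan.
func copyShardsToPlannedDisk(ctx context.Context, client volume_server_pb.VolumeServerClient,
	volumeId uint32, collection string, shardIds []uint32, sourceNode string, targetDisk uint32) error {
	_, err := client.VolumeEcShardsCopy(ctx, &volume_server_pb.VolumeEcShardsCopyRequest{
		VolumeId:       volumeId,
		Collection:     collection,
		ShardIds:       shardIds,
		CopyEcxFile:    true,       // bring the .ecx index along with the shards
		SourceDataNode: sourceNode, // where the shards currently live
		DiskId:         targetDisk, // planned destination disk (this PR's addition)
	})
	return err
}
```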
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Diffstat (limited to 'weed/server/volume_grpc_copy.go')
| -rw-r--r-- | weed/server/volume_grpc_copy.go | 117 |
1 file changed, 117 insertions, 0 deletions
```diff
diff --git a/weed/server/volume_grpc_copy.go b/weed/server/volume_grpc_copy.go
index 0e733fc0a..84a9035ca 100644
--- a/weed/server/volume_grpc_copy.go
+++ b/weed/server/volume_grpc_copy.go
@@ -402,3 +402,120 @@ func (vs *VolumeServer) CopyFile(req *volume_server_pb.CopyFileRequest, stream v
 
 	return nil
 }
+
+// ReceiveFile receives a file stream from client and writes it to storage
+func (vs *VolumeServer) ReceiveFile(stream volume_server_pb.VolumeServer_ReceiveFileServer) error {
+	var fileInfo *volume_server_pb.ReceiveFileInfo
+	var targetFile *os.File
+	var filePath string
+	var bytesWritten uint64
+
+	defer func() {
+		if targetFile != nil {
+			targetFile.Close()
+		}
+	}()
+
+	for {
+		req, err := stream.Recv()
+		if err == io.EOF {
+			// Stream completed successfully
+			if targetFile != nil {
+				targetFile.Sync()
+				glog.V(1).Infof("Successfully received file %s (%d bytes)", filePath, bytesWritten)
+			}
+			return stream.SendAndClose(&volume_server_pb.ReceiveFileResponse{
+				BytesWritten: bytesWritten,
+			})
+		}
+		if err != nil {
+			// Clean up on error
+			if targetFile != nil {
+				targetFile.Close()
+				os.Remove(filePath)
+			}
+			glog.Errorf("Failed to receive stream: %v", err)
+			return fmt.Errorf("failed to receive stream: %v", err)
+		}
+
+		switch data := req.Data.(type) {
+		case *volume_server_pb.ReceiveFileRequest_Info:
+			// First message contains file info
+			fileInfo = data.Info
+			glog.V(1).Infof("ReceiveFile: volume %d, ext %s, collection %s, shard %d, size %d",
+				fileInfo.VolumeId, fileInfo.Ext, fileInfo.Collection, fileInfo.ShardId, fileInfo.FileSize)
+
+			// Create file path based on file info
+			if fileInfo.IsEcVolume {
+				// Find storage location for EC shard
+				var targetLocation *storage.DiskLocation
+				for _, location := range vs.store.Locations {
+					if location.DiskType == types.HardDriveType {
+						targetLocation = location
+						break
+					}
+				}
+				if targetLocation == nil && len(vs.store.Locations) > 0 {
+					targetLocation = vs.store.Locations[0] // Fall back to first available location
+				}
+				if targetLocation == nil {
+					glog.Errorf("ReceiveFile: no storage location available")
+					return stream.SendAndClose(&volume_server_pb.ReceiveFileResponse{
+						Error: "no storage location available",
+					})
+				}
+
+				// Create EC shard file path
+				baseFileName := erasure_coding.EcShardBaseFileName(fileInfo.Collection, int(fileInfo.VolumeId))
+				filePath = util.Join(targetLocation.Directory, baseFileName+fileInfo.Ext)
+			} else {
+				// Regular volume file
+				v := vs.store.GetVolume(needle.VolumeId(fileInfo.VolumeId))
+				if v == nil {
+					glog.Errorf("ReceiveFile: volume %d not found", fileInfo.VolumeId)
+					return stream.SendAndClose(&volume_server_pb.ReceiveFileResponse{
+						Error: fmt.Sprintf("volume %d not found", fileInfo.VolumeId),
+					})
+				}
+				filePath = v.FileName(fileInfo.Ext)
+			}
+
+			// Create target file
+			targetFile, err = os.Create(filePath)
+			if err != nil {
+				glog.Errorf("ReceiveFile: failed to create file %s: %v", filePath, err)
+				return stream.SendAndClose(&volume_server_pb.ReceiveFileResponse{
+					Error: fmt.Sprintf("failed to create file: %v", err),
+				})
+			}
+			glog.V(1).Infof("ReceiveFile: created target file %s", filePath)
+
+		case *volume_server_pb.ReceiveFileRequest_FileContent:
+			// Subsequent messages contain file content
+			if targetFile == nil {
+				glog.Errorf("ReceiveFile: file info must be sent first")
+				return stream.SendAndClose(&volume_server_pb.ReceiveFileResponse{
+					Error: "file info must be sent first",
+				})
+			}
+
+			n, err := targetFile.Write(data.FileContent)
+			if err != nil {
+				targetFile.Close()
+				os.Remove(filePath)
+				glog.Errorf("ReceiveFile: failed to write to file %s: %v", filePath, err)
+				return stream.SendAndClose(&volume_server_pb.ReceiveFileResponse{
+					Error: fmt.Sprintf("failed to write file: %v", err),
+				})
+			}
+			bytesWritten += uint64(n)
+			glog.V(2).Infof("ReceiveFile: wrote %d bytes to %s (total: %d)", n, filePath, bytesWritten)
+
+		default:
+			glog.Errorf("ReceiveFile: unknown message type")
+			return stream.SendAndClose(&volume_server_pb.ReceiveFileResponse{
+				Error: "unknown message type",
+			})
+		}
+	}
+}
```
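For context on how a caller drives this new handler, below is a minimal client-side sketch of the protocol the diff defines: one `Info` message, then `FileContent` chunks, then `CloseAndRecv`. The helper name, chunk size, and `main`-package placement are illustrative and not part of this PR; the message and field names come from the diff above.

```go
package main // illustrative placement; in SeaweedFS this would live in a worker package

import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb"
)

// sendEcShard streams a local EC shard file to a volume server via ReceiveFile:
// first the file metadata, then the content in fixed-size chunks.
func sendEcShard(ctx context.Context, client volume_server_pb.VolumeServerClient,
	localPath string, volumeId uint32, collection, ext string, shardId uint32) error {

	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()
	fi, err := f.Stat()
	if err != nil {
		return err
	}

	stream, err := client.ReceiveFile(ctx)
	if err != nil {
		return err
	}
	// First message: file info, so the server can resolve the target path.
	if err := stream.Send(&volume_server_pb.ReceiveFileRequest{
		Data: &volume_server_pb.ReceiveFileRequest_Info{
			Info: &volume_server_pb.ReceiveFileInfo{
				VolumeId:   volumeId,
				Ext:        ext, // e.g. ".ec00"
				Collection: collection,
				IsEcVolume: true,
				ShardId:    shardId,
				FileSize:   uint64(fi.Size()),
			},
		},
	}); err != nil {
		return err
	}
	// Subsequent messages: raw content in 64 KiB chunks (the size is arbitrary here).
	buf := make([]byte, 64*1024)
	for {
		n, readErr := f.Read(buf)
		if n > 0 {
			if err := stream.Send(&volume_server_pb.ReceiveFileRequest{
				Data: &volume_server_pb.ReceiveFileRequest_FileContent{FileContent: buf[:n]},
			}); err != nil {
				return err
			}
		}
		if readErr == io.EOF {
			break
		}
		if readErr != nil {
			return readErr
		}
	}
	resp, err := stream.CloseAndRecv()
	if err != nil {
		return err
	}
	if resp.Error != "" {
		return fmt.Errorf("receive file failed: %s", resp.Error)
	}
	return nil
}
```

Note that the handler reports failures such as "volume not found" through the application-level `Error` field while still closing the stream normally, so a client must check both the gRPC error and `resp.Error`.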
