path: root/weed/server/volume_server_handlers_admin.go
Age | Commit message | Author | Files | Lines
2025-12-02 | fix: volume server healthz now checks local conditions only (#7610) | Chris Lu | 1 | -13/+18
This fixes issue #6823, where a single volume server shutdown would cause other healthy volume servers to fail their health checks and get restarted by Kubernetes, causing a cascading failure.

Previously, the healthz handler checked if all replicated volumes could reach their remote replicas via GetWritableRemoteReplications(). When a volume server went down, the master would remove it from the volume location list. Other volume servers would then fail their healthz checks because they couldn't find all required replicas, causing Kubernetes to restart them.

The healthz endpoint now only checks local conditions:
1. Is the server shutting down?
2. Is the server heartbeating with the master?

This follows the principle that a health check should only verify the health of THIS server, not the overall cluster state.

Fixes #6823
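A minimal sketch of such a local-only health check is shown below. This is an illustration of the idea, not the actual handler in volume_server_handlers_admin.go; the `stopping` and `lastHeartbeatUnix` fields and the 30-second heartbeat window are hypothetical.

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

// healthzServer keeps only local state: whether this process is shutting
// down and when it last heartbeated with the master. Field names are
// hypothetical, chosen for illustration.
type healthzServer struct {
	stopping          atomic.Bool
	lastHeartbeatUnix atomic.Int64 // unix seconds of the last successful master heartbeat
}

// healthz reports unhealthy only for local conditions; it never consults
// other volume servers or cluster-wide replica placement.
func (s *healthzServer) healthz(w http.ResponseWriter, r *http.Request) {
	if s.stopping.Load() {
		http.Error(w, "shutting down", http.StatusServiceUnavailable)
		return
	}
	if time.Since(time.Unix(s.lastHeartbeatUnix.Load(), 0)) > 30*time.Second {
		http.Error(w, "no recent master heartbeat", http.StatusServiceUnavailable)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	s := &healthzServer{}
	s.lastHeartbeatUnix.Store(time.Now().Unix()) // pretend a heartbeat just succeeded
	http.HandleFunc("/healthz", s.healthz)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```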
2025-07-30 | Admin: misc improvements on admin server and workers. EC now works. (#7055) | Chris Lu | 1 | -2/+3
* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test. ec worker task
* remove "enhanced" reference
* start master, volume servers, filer
  Current Status
  ✅ Master: Healthy and running (port 9333)
  ✅ Filer: Healthy and running (port 8888)
  ✅ Volume Servers: All 6 servers running (ports 8080-8085)
  🔄 Admin/Workers: Will start when dependencies are ready
* generate write load
* tasks are assigned
* admin start with grpc port. worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnect to admin
* clean up logs
* worker register itself first
* worker can run ec work and report status, but:
  1. one volume should not be repeatedly worked on.
  2. ec shards need to be distributed and source data should be deleted.
* move ec task logic
* listing ec shards
* local copy, ec. Need to distribute.
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig use ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task register its schema
* checkECEncodingCandidate use ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} Reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume server during task planning
* volume.list adds disk id
* track disk id also
* fix locking scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implementing ec copying shards and only ecx files
* use disk id when distributing ec shards (see the sketch after this list)
  🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
  📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
  🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
  💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
  📂 File System: EC shards and metadata land in the exact disk directory planned
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
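The disk-id flow summarized in the "use disk id when distributing ec shards" bullet (planner picks a target disk, the task forwards the disk id, the volume server resolves it to a specific local directory) can be sketched as follows. This is a simplified illustration with hypothetical types, not the actual SeaweedFS gRPC messages or store layout.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// DestinationPlan is a hypothetical stand-in for the planner's output:
// which server and which disk should receive a given EC shard.
type DestinationPlan struct {
	TargetServer string
	TargetDisk   int
}

// diskLocation mimics one data directory on one physical disk.
type diskLocation struct {
	Directory string
}

// volumeServer holds one diskLocation per configured disk, indexed by disk id.
type volumeServer struct {
	locations []diskLocation
}

// copyEcShard shows the key idea: the request carries a disk id, and the
// server resolves it to a specific local directory instead of choosing a
// disk on its own.
func (vs *volumeServer) copyEcShard(volumeID uint32, shardID int, diskID int) (string, error) {
	if diskID < 0 || diskID >= len(vs.locations) {
		return "", fmt.Errorf("disk id %d out of range", diskID)
	}
	dir := vs.locations[diskID].Directory
	// The shard file lands under the disk directory the planner selected.
	return filepath.Join(dir, fmt.Sprintf("%d.ec%02d", volumeID, shardID)), nil
}

func main() {
	vs := &volumeServer{locations: []diskLocation{{"/data/disk0"}, {"/data/disk1"}}}
	plan := DestinationPlan{TargetServer: "volume-3:8080", TargetDisk: 1}
	path, err := vs.copyEcShard(42, 3, plan.TargetDisk)
	if err != nil {
		panic(err)
	}
	fmt.Println("shard would land at", path) // /data/disk1/42.ec03
}
```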
2025-06-03 | change version directory | chrislu | 1 | -6/+6
2022-07-29 | move to https://github.com/seaweedfs/seaweedfs | chrislu | 1 | -4/+4
2022-02-16 | healthz check to avoid drain pod with last replicas | Konstantin Lebedev | 1 | -0/+19
2021-02-13 | support customizable disk type | Chris Lu | 1 | -2/+2
2020-12-14 | adjust volume server UI | Chris Lu | 1 | -2/+6
2020-09-20 | refactoring | Chris Lu | 1 | -0/+2
2020-06-02 | inject git version into build | Chris Lu | 1 | -2/+2
2020-02-23 | status route: add DiskStatuses for disks in the volume server status | LazyDBA247-Anyvision | 1 | -0/+7
When monitoring the server, it is better to know the status of the disks & volumes in a single route.
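The shape of such a combined status response can be sketched as below; the field names are illustrative only, not the exact SeaweedFS response schema, though DiskStatus and DiskStatuses are the names mentioned in the commits above.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// DiskStatus is an illustrative per-disk summary; real fields may differ.
type DiskStatus struct {
	Dir  string `json:"Dir"`
	All  uint64 `json:"All"`  // total bytes
	Used uint64 `json:"Used"` // used bytes
	Free uint64 `json:"Free"` // free bytes
}

// statusResponse bundles version, volume count, and per-disk status so a
// monitoring system can scrape one endpoint.
type statusResponse struct {
	Version      string       `json:"Version"`
	Volumes      int          `json:"Volumes"`
	DiskStatuses []DiskStatus `json:"DiskStatuses"`
}

func statusHandler(w http.ResponseWriter, r *http.Request) {
	resp := statusResponse{
		Version: "example",
		Volumes: 7,
		DiskStatuses: []DiskStatus{
			{Dir: "/data", All: 1 << 40, Used: 1 << 39, Free: 1 << 39},
		},
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/status", statusHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```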
2019-12-02 | add lock variable | Chris Lu | 1 | -1/+1
2018-10-23 | go fmt | Chris Lu | 1 | -1/+1
2018-10-15 | move DiskStatus and MemStatus to protobuf | Chris Lu | 1 | -1/+2
2018-10-15 | migrate volume sync to gRpc | Chris Lu | 1 | -17/+3
2018-10-15 | move volume mount/unmount on volume server to grpc | Chris Lu | 1 | -20/+0
2018-10-15 | remove volume server /admin/volume/delete | Chris Lu | 1 | -10/+0
2018-10-15 | migrate volume sync status to grpc API on volume server | Chris Lu | 1 | -3/+4
2018-10-15 | migrate assign volume to grpc API on volume server | Chris Lu | 1 | -30/+1
2018-10-15 | migrate delete collection to grpc API on volume server | Chris Lu | 1 | -10/+0
2018-05-31 | fix log | Chris Lu | 1 | -2/+2
2018-05-27 | go fmt | Chris Lu | 1 | -1/+1
2017-01-20 | Delete volumes online without restarting volume server | brstgt | 1 | -0/+44
2017-01-08 | support Fallocate on linux | Chris Lu | 1 | -3/+20
2016-06-02 | directory structure change to work with glide | Chris Lu | 1 | -0/+50
glide has its own requirements. My previous workaround caused me some code check-in errors. Need to fix this.