- CRITICAL: Make socket path configurable based on mountEndpoint
  - Added volumeSocketDir field to SeaweedFsDriver
  - LocalSocketPath now accepts a baseDir parameter
  - The directory is derived from mountEndpoint, making socket paths user-configurable (see the sketch after this list)
- HIGH: Pin seaweedfs version in Dockerfiles for reproducible builds
  - Added SEAWEEDFS_VERSION build arg (default: 3.80)
  - Clone the specific tag instead of master
- HIGH: Fix Dockerfile.dev to use local context instead of personal fork
  - Removed hardcoded zemul/seaweedfs-csi-driver clone
  - Now uses COPY . . for local development
- HIGH: Change :latest to :dev in Kubernetes manifests
  - Mutable :latest tag replaced with :dev for predictability
- MEDIUM: Remove Aliyun mirror from Dockerfile.dev
  - Region-specific mirrors shouldn't be in general-purpose files
- MEDIUM: Improve error handling in client.go
  - Now reports the read error when the error response body itself cannot be read
- MEDIUM: Fix inconsistent error return in manager.go
  - Return nil instead of an empty struct on error (Go idiom)
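A minimal Go sketch of the shape this change implies; the exact LocalSocketPath signature and the use of a volume-ID hash for the socket filename are assumptions for illustration, not the driver's confirmed code:

```go
package driver

import (
	"fmt"
	"hash/fnv"
	"path/filepath"
)

// LocalSocketPath builds the unix socket path for a volume's FUSE mounter.
// baseDir is derived from mountEndpoint, so operators control where the
// sockets live instead of relying on a hard-coded directory.
func LocalSocketPath(baseDir, volumeID string) string {
	h := fnv.New32a()
	h.Write([]byte(volumeID)) // stable, filename-safe volume identifier
	return filepath.Join(baseDir, fmt.Sprintf("%d.sock", h.Sum32()))
}
```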
|
the CSI components to call it.
|
Address gemini-code-assist review feedback:
1. Return error from volume.Quota() failure in stageNewVolume - quota
failures should fail the staging operation
2. Return error from cleanupStaleStagingPath() in NodeStageVolume -
fail fast if cleanup fails rather than attempting to stage anyway
3. Return error from cleanupStaleStagingPath() in NodePublishVolume -
same fail-fast behavior for consistency
4. Return error from mount.CleanupMountPoint() in Volume.Unstage() -
propagate cleanup errors to the caller as expected (see the sketch below)
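A sketch of the fail-fast pattern in item 4. Volume and its fields are simplified stand-ins for the driver's actual types; CleanupMountPoint is the real k8s.io/mount-utils helper:

```go
package driver

import (
	"fmt"

	mountutil "k8s.io/mount-utils"
)

// Volume is a simplified stand-in for the driver's volume type.
type Volume struct {
	StagingTargetPath string
	mounter           mountutil.Interface
}

// Unstage propagates cleanup errors instead of swallowing them, so the
// caller (and ultimately kubelet) sees the failure and can retry.
func (v *Volume) Unstage() error {
	if err := mountutil.CleanupMountPoint(v.StagingTargetPath, v.mounter, true); err != nil {
		return fmt.Errorf("cleanup mount point %s: %w", v.StagingTargetPath, err)
	}
	return nil
}
```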
|
- Handle unexpected stat errors in cleanupStaleStagingPath (high priority; see the sketch below)
- Extract staging logic into stageNewVolume helper method for reuse
- Extract isReadOnlyAccessMode helper to avoid duplicated read-only checks
- Remove redundant mountutil.Unmount call (CleanupMountPoint already handles it)
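A sketch of the stat-error handling in the first item, assuming a stale FUSE mount surfaces as ENOTCONN ("transport endpoint is not connected"); the function body is illustrative, not the merged code:

```go
package driver

import (
	"errors"
	"fmt"
	"os"
	"syscall"

	mountutil "k8s.io/mount-utils"
)

// cleanupStaleStagingPath (illustrative): an absent path needs no cleanup,
// a dead FUSE mount (ENOTCONN) is torn down, and any other stat error is
// returned instead of being silently ignored.
func cleanupStaleStagingPath(path string, mounter mountutil.Interface) error {
	_, err := os.Stat(path)
	switch {
	case err == nil, os.IsNotExist(err):
		return nil // reachable or absent: nothing stale to tear down
	case errors.Is(err, syscall.ENOTCONN):
		return mountutil.CleanupMountPoint(path, mounter, true)
	default:
		return fmt.Errorf("unexpected stat error on %s: %w", path, err)
	}
}
```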
|
This addresses issue #203 - CSI Driver Self-Healing for Volume Mount Failures.
Problem:
When the CSI node driver restarts, the in-memory volume cache is lost.
Kubelet then directly calls NodePublishVolume (skipping NodeStageVolume),
which fails with a 'volume hasn't been staged yet' error.
Solution:
1. Added isStagingPathHealthy() to detect healthy vs stale/corrupted mounts (see the sketch below)
2. Added cleanupStaleStagingPath() to clean up stale mount points
3. Enhanced NodeStageVolume to clean up stale mounts before staging
4. Implemented self-healing in NodePublishVolume:
- If staging path is healthy: rebuild volume cache from existing mount
- If staging path is stale: clean up and re-stage automatically
5. Updated Volume.Unstage to handle rebuilt volumes without an unmounter
Benefits:
- Automatic recovery after CSI driver restarts
- No manual intervention required (no kubelet/pod restarts needed)
- Handles both live and dead FUSE mount scenarios
- Backward compatible with normal operations
Fixes #203
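A sketch of the health check; the helper name comes from the commit message, the body is an assumption. A live FUSE mount answers os.Stat, a dead one fails (typically with ENOTCONN), and a leftover directory that is no longer a mount point counts as stale:

```go
package driver

import (
	"os"

	mountutil "k8s.io/mount-utils"
)

// isStagingPathHealthy (illustrative): the staging path is healthy only
// if it still answers stat and is actually a mount point rather than a
// leftover directory from a previous driver instance.
func isStagingPathHealthy(path string, mounter mountutil.Interface) bool {
	if _, err := os.Stat(path); err != nil {
		return false // dead FUSE mounts typically fail here with ENOTCONN
	}
	notMnt, err := mounter.IsLikelyNotMountPoint(path)
	return err == nil && !notMnt
}
```

NodePublishVolume can then branch on this check: rebuild the volume cache from a healthy mount, or clean up and re-stage a stale one.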
|
This reverts commit 44283c0ffe56e3180dae5b93801d07a3d621d355.
|
This reverts commit 4f60c279001475dcb398ed8c852ff2c6e366e16e.
|
k8s.io/mount-utils
|
Use the PID from cmd.Process instead of a /proc lookup
Use a mount-specific mutex
Log the FUSE mount process's stderr and stdout to aid problem investigation (see the sketch below)
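A hedged sketch combining the three changes; the weed binary name matches SeaweedFS, while startFuseMount, the global mutex, and the standard-library logger are illustrative stand-ins:

```go
package driver

import (
	"bufio"
	"io"
	"log"
	"os/exec"
	"sync"
)

// mountMu is the mount-specific mutex; a real driver would likely hold
// one lock per target path rather than a single global one.
var mountMu sync.Mutex

// startFuseMount takes the PID straight from cmd.Process (no /proc scan)
// and streams the child's stdout and stderr into the log so failed
// mounts leave something to investigate.
func startFuseMount(args ...string) (int, error) {
	mountMu.Lock()
	defer mountMu.Unlock()

	cmd := exec.Command("weed", args...)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return 0, err
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		return 0, err
	}
	if err := cmd.Start(); err != nil {
		return 0, err
	}
	logPipe := func(name string, r io.Reader) {
		sc := bufio.NewScanner(r)
		for sc.Scan() {
			log.Printf("weed mount %s: %s", name, sc.Text())
		}
	}
	go logPipe("stdout", stdout)
	go logPipe("stderr", stderr)
	return cmd.Process.Pid, nil
}
```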