mount: remove unused isEarlyTerminated variable
The variable was redundant: when processEachEntryFn returns false, we
return fuse.OK immediately, so by the time the isEarlyTerminated check
ran the variable was always false.
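A minimal sketch of the dead pattern being removed; apart from the
processEachEntryFn and fuse.OK names taken from the message, the loop
shape and all other names are assumptions, not the actual SeaweedFS code:

    package main

    import "fmt"

    // Status stands in for fuse.Status; OK stands in for fuse.OK.
    type Status int

    const OK Status = 0

    func listEntries(entries []string, processEachEntryFn func(string) bool) Status {
        for _, entry := range entries {
            if !processEachEntryFn(entry) {
                // Early return: an "isEarlyTerminated = true" set here
                // would never be seen by code after the loop...
                return OK
            }
        }
        // ...so down here isEarlyTerminated is always false and its
        // check is dead code, which is why the variable was removed.
        return OK
    }

    func main() {
        listEntries([]string{"a", "b"}, func(entry string) bool {
            fmt.Println(entry)
            return true
        })
    }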
* fix NFS list-with-prefix batch scan
* remove else branch
* mount: fix weed inode nlookup not matching kernel inode nlookup
* mount: add underflow protection for nlookup decrement in Forget
* mount: use consistent == 0 check for uint64 nlookup
* Update weed/mount/inode_to_path.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* mount: snapshot data before unlock in Forget to avoid using deleted InodeEntry
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
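A compact sketch of the Forget changes listed above: clamp the uint64
decrement so it cannot underflow, use a consistent == 0 check, and
snapshot the entry's data while it is still alive so nothing touches a
deleted InodeEntry afterwards. Forget, nlookup, and InodeEntry come from
the commit; the struct layout and every other name are assumptions, not
the actual weed/mount/inode_to_path.go code:

    package main

    import "sync"

    type InodeEntry struct {
        nlookup uint64
        paths   []string
    }

    type InodeToPath struct {
        sync.Mutex
        inode2path map[uint64]*InodeEntry
    }

    func (i *InodeToPath) Forget(inode, nlookup uint64) (paths []string, removed bool) {
        i.Lock()
        defer i.Unlock()
        entry, found := i.inode2path[inode]
        if !found {
            return nil, false
        }
        // Underflow protection: a plain "entry.nlookup -= nlookup" on a
        // uint64 would wrap around to a huge value if nlookup is larger.
        if entry.nlookup > nlookup {
            entry.nlookup -= nlookup
        } else {
            entry.nlookup = 0
        }
        if entry.nlookup == 0 { // consistent == 0 check for uint64
            // Snapshot what we still need before the entry is deleted,
            // so callers never read from a removed InodeEntry.
            paths = append(paths, entry.paths...)
            delete(i.inode2path, inode)
            return paths, true
        }
        return nil, false
    }

    func main() {
        m := &InodeToPath{inode2path: map[uint64]*InodeEntry{
            2: {nlookup: 1, paths: []string{"/some/file"}},
        }}
        m.Forget(2, 5) // over-forget is clamped instead of underflowing
    }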
* mount: add mutex to DirectoryHandle to fix race condition
When using Ganesha NFS on top of FUSE mount, ls operations would hang
forever on directories with hundreds of files. This was caused by a
race condition in DirectoryHandle where multiple concurrent readdir
operations could modify shared state (entryStream, entryStreamOffset,
isFinished) without synchronization.
The fix adds a mutex to DirectoryHandle and holds it for the entire
duration of doReadDirectory. This serializes concurrent readdir calls
on the same handle, which is the correct behavior for a directory
handle and fixes the race condition.
Key changes:
- Added sync.Mutex to DirectoryHandle struct
- Lock the mutex at the start of doReadDirectory
- This ensures thread-safe access to entryStream and other state
The lock is per-handle (not global), so different directories can
still be listed concurrently. Only concurrent operations on the
same directory handle are serialized.
Fixes: https://github.com/seaweedfs/seaweedfs/issues/7672
* mount: optimize reset() to reuse slice capacity and allow GC of old entries
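A standalone sketch of the two fixes above: a per-handle mutex held for
the whole of doReadDirectory, and a reset() that keeps the slice's
backing array while letting the old entries be garbage-collected. The
DirectoryHandle field names come from the commit message; everything
else is assumed for illustration, not the actual SeaweedFS code:

    package main

    import "sync"

    type Entry struct{ name string }

    type DirectoryHandle struct {
        mu                sync.Mutex // serializes readdir on this handle only
        entryStream       []*Entry
        entryStreamOffset uint64
        isFinished        bool
    }

    func (dh *DirectoryHandle) reset() {
        // Reuse the backing array, but nil out old pointers so the
        // previous entries become garbage-collectable.
        for i := range dh.entryStream {
            dh.entryStream[i] = nil
        }
        dh.entryStream = dh.entryStream[:0]
        dh.entryStreamOffset = 0
        dh.isFinished = false
    }

    func (dh *DirectoryHandle) doReadDirectory(fill func(*Entry)) {
        // Held for the entire call: concurrent readdir on the same
        // handle is serialized; other handles proceed in parallel.
        dh.mu.Lock()
        defer dh.mu.Unlock()
        if dh.isFinished {
            return
        }
        for _, entry := range dh.entryStream {
            fill(entry)
        }
        dh.isFinished = true
    }

    func main() {
        dh := &DirectoryHandle{entryStream: []*Entry{{"a"}, {"b"}}}
        dh.doReadDirectory(func(entry *Entry) {})
        dh.reset()
    }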
* added error return in type ListEachEntryFunc
* return error if errClose
* fix fmt.Errorf
* fix return errClose
* use %w in fmt.Errorf
* added entry to error message
* add callbackErr in ListDirectoryEntries
* fix error
* add log
* clear err when the scanner stops on io.EOF, so returning err doesn’t surface EOF as a failure.
* more info in error
* add ctx to logs, error handling
* fix return eachEntryFunc
* fix
* fix log
* fix return
* fix foundationdb tests
* fix eachEntryFunc
* fix return resEachEntryFuncErr
* Update weed/filer/filer.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/elastic/v7/elastic_store.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/hbase/hbase_store.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/foundationdb/foundationdb_store.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/ydb/ydb_store.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix
* add scanErr
---------
Co-authored-by: Roman Tamarov <r.tamarov@kryptonite.ru>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
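Taken together, the changes above amount to the following shape: the
per-entry callback returns an error, the listing loop propagates
callback errors wrapped with %w, and io.EOF from the scanner is cleared
so a normal end of stream is not surfaced as a failure. Only the
ListEachEntryFunc name is taken from the commits; the scanner and loop
are a hedged sketch, not the actual store code:

    package main

    import (
        "errors"
        "fmt"
        "io"
    )

    type Entry struct{ Name string }

    // ListEachEntryFunc now returns an error alongside the continue flag.
    type ListEachEntryFunc func(entry *Entry) (bool, error)

    func listDirectoryEntries(scan func() (*Entry, error), eachEntryFn ListEachEntryFunc) error {
        for {
            entry, err := scan()
            if err != nil {
                if errors.Is(err, io.EOF) {
                    return nil // scanner stopped normally: clear, don't surface EOF
                }
                return fmt.Errorf("scan: %w", err) // %w lets callers unwrap
            }
            ok, callbackErr := eachEntryFn(entry)
            if callbackErr != nil {
                return fmt.Errorf("process entry %s: %w", entry.Name, callbackErr)
            }
            if !ok {
                return nil // callback asked to stop early
            }
        }
    }

    func main() {
        entries := []*Entry{{"a"}, {"b"}}
        i := 0
        scan := func() (*Entry, error) {
            if i == len(entries) {
                return nil, io.EOF
            }
            entry := entries[i]
            i++
            return entry, nil
        }
        _ = listDirectoryEntries(scan, func(entry *Entry) (bool, error) {
            return true, nil
        })
    }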
* reduce locks to avoid deadlock
Flush -> FlushData -> uploadPipeline.FlushAll
uploaderCount > 0
goroutine 1 [sync.Cond.Wait, 71 minutes]:
sync.runtime_notifyListWait(0xc0007ae4d0, 0x0)
/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001a59290?)
/usr/local/go/src/sync/cond.go:70 +0x85
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*UploadPipeline).waitForCurrentWritersToComplete(0xc0002ee4d0)
/github/workspace/weed/mount/page_writer/upload_pipeline_lock.go:58 +0x32
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*UploadPipeline).FlushAll(0xc0002ee4d0)
/github/workspace/weed/mount/page_writer/upload_pipeline.go:151 +0x25
github.com/seaweedfs/seaweedfs/weed/mount.(*ChunkedDirtyPages).FlushData(0xc00087e840)
/github/workspace/weed/mount/dirty_pages_chunked.go:54 +0x29
github.com/seaweedfs/seaweedfs/weed/mount.(*PageWriter).FlushData(...)
/github/workspace/weed/mount/page_writer.go:50
github.com/seaweedfs/seaweedfs/weed/mount.(*WFS).doFlush(0xc0006ad600, 0xc00030d380, 0x0, 0x0)
/github/workspace/weed/mount/weedfs_file_sync.go:101 +0x169
github.com/seaweedfs/seaweedfs/weed/mount.(*WFS).Flush(0xc0006ad600, 0xc001a594a8?, 0xc0004c1ca0)
/github/workspace/weed/mount/weedfs_file_sync.go:59 +0x48
github.com/hanwen/go-fuse/v2/fuse.doFlush(0xc0000da870?, 0xc0004c1b08)
SaveContent -> MemChunk.RLock ->
ChunkedDirtyPages.saveChunkedFileIntervalToStorage
pages.fh.AddChunks([]*filer_pb.FileChunk{chunk})
fh.entryLock.Lock()
sync.(*RWMutex).Lock(0x0?)
/usr/local/go/src/sync/rwmutex.go:146 +0x31
github.com/seaweedfs/seaweedfs/weed/mount.(*FileHandle).AddChunks(0xc00030d380, {0xc00028bdc8, 0x1, 0x1})
/github/workspace/weed/mount/filehandle.go:93 +0x45
github.com/seaweedfs/seaweedfs/weed/mount.(*ChunkedDirtyPages).saveChunkedFileIntervalToStorage(0xc00087e840, {0x2be7ac0, 0xc00018d9e0}, 0x0, 0x121, 0x17e3c624565ace45, 0x1?)
/github/workspace/weed/mount/dirty_pages_chunked.go:80 +0x2d4
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*MemChunk).SaveContent(0xc0008d9130, 0xc0008093e0)
/github/workspace/weed/mount/page_writer/page_chunk_mem.go:115 +0x112
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*UploadPipeline).moveToSealed.func1()
/github/workspace/weed/mount/page_writer/upload_pipeline.go:187 +0x55
github.com/seaweedfs/seaweedfs/weed/util.(*LimitedConcurrentExecutor).Execute.func1()
/github/workspace/weed/util/limited_executor.go:38 +0x62
created by github.com/seaweedfs/seaweedfs/weed/util.(*LimitedConcurrentExecutor).Execute in goroutine 1
/github/workspace/weed/util/limited_executor.go:33 +0x97
On metadata update
fh.entryLock.Lock()
fh.dirtyPages.Destroy()
up.chunksLock.Lock => each sealed chunk.FreeReference => MemChunk.Lock
goroutine 134 [sync.RWMutex.Lock, 71 minutes]:
sync.runtime_SemacquireRWMutex(0xc0007c3558?, 0xea?, 0x3fb0800?)
/usr/local/go/src/runtime/sema.go:87 +0x25
sync.(*RWMutex).Lock(0xc0007c35a8?)
/usr/local/go/src/sync/rwmutex.go:151 +0x6a
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*MemChunk).FreeResource(0xc0008d9130)
/github/workspace/weed/mount/page_writer/page_chunk_mem.go:38 +0x2a
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*SealedChunk).FreeReference(0xc00071cdb0, {0xc0006ba1a0, 0x20})
/github/workspace/weed/mount/page_writer/upload_pipeline.go:38 +0xb7
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*UploadPipeline).Shutdown(0xc0002ee4d0)
/github/workspace/weed/mount/page_writer/upload_pipeline.go:220 +0x185
github.com/seaweedfs/seaweedfs/weed/mount.(*ChunkedDirtyPages).Destroy(0xc0008cea40?)
/github/workspace/weed/mount/dirty_pages_chunked.go:87 +0x17
github.com/seaweedfs/seaweedfs/weed/mount.(*PageWriter).Destroy(...)
/github/workspace/weed/mount/page_writer.go:78
github.com/seaweedfs/seaweedfs/weed/mount.NewSeaweedFileSystem.func3({0xc00069a6c0, 0x30}, 0x6?)
/github/workspace/weed/mount/weedfs.go:119 +0x17a
github.com/seaweedfs/seaweedfs/weed/mount/meta_cache.NewMetaCache.func1({0xc00069a6c0?, 0xc00069a480?}, 0x4015b40?)
/github/workspace/weed/mount/meta_cache/meta_cache.go:37 +0x1c
github.com/seaweedfs/seaweedfs/weed/mount/meta_cache.SubscribeMetaEvents.func1(0xc000661810)
/github/workspace/weed/mount/meta_cache/meta_cache_subscribe.go:43 +0x570
* use locked entry everywhere
* modifiable remote entry
* skip locking after getting lock from fhLockTable
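Stripped of the FUSE specifics, the traces above form a classic two-lock
cycle: the uploader holds the chunk lock and wants the entry lock
(SaveContent -> AddChunks), while the metadata updater holds the entry
lock and wants the chunk lock (Destroy -> Shutdown -> FreeReference),
and the Flush goroutine waits on the uploader to finish. A minimal
standalone reproduction, with all names assumed for illustration:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var chunkLock, entryLock sync.Mutex

        // Uploader goroutine: holds the chunk lock while saving content,
        // then needs the entry lock to record the uploaded chunk.
        go func() {
            chunkLock.Lock()
            time.Sleep(10 * time.Millisecond)
            entryLock.Lock() // blocks: metadata updater holds entryLock
            entryLock.Unlock()
            chunkLock.Unlock()
        }()

        // Metadata-update goroutine: holds the entry lock, then destroys
        // dirty pages, which frees chunks under the chunk lock.
        go func() {
            entryLock.Lock()
            time.Sleep(10 * time.Millisecond)
            chunkLock.Lock() // blocks: uploader holds chunkLock -> deadlock
            chunkLock.Unlock()
            entryLock.Unlock()
        }()

        time.Sleep(100 * time.Millisecond)
        fmt.Println("both goroutines are now deadlocked")
    }

Reducing the scope of each lock, as this commit does, means neither path
acquires the second lock while still holding the first, so the cycle
cannot form.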