path: root/weed/server/webdav_server.go
Age  |  Commit message  |  Author  |  Files  |  Lines
2025-11-29  |  mount: improve read throughput with parallel chunk fetching (#7569)  |  Chris Lu  |  1  |  -1/+1

    This addresses issue #7504, where a single weed mount FUSE instance does
    not fully utilize node network bandwidth when reading large files.

    Changes:
    - Add -concurrentReaders mount option (default: 16) to control the maximum
      number of parallel chunk fetches during read operations
    - Implement parallel section reading in ChunkGroup.ReadDataAt() using
      errgroup for better throughput when reading across multiple sections
    - Enhance ReaderCache with MaybeCacheMany() to prefetch multiple chunks
      ahead in parallel during sequential reads (now prefetches 4 chunks)
    - Increase the ReaderCache limit dynamically based on concurrentReaders to
      support higher read parallelism

    The bottleneck was that chunks were read sequentially even when they
    reside on different volume servers. With parallel chunk fetching, a single
    mount instance can better saturate the available network bandwidth.
    Fixes: #7504

    Review follow-ups:
    1. Add DefaultPrefetchCount constant (4) to reader_at.go
    2. Add GetPrefetchCount() method to ChunkGroup that derives the prefetch
       count from concurrentReaders (1/4 ratio, min 1, max 8)
    3. Pass the prefetch count through NewChunkReaderAtFromClient
    4. Fix error handling in readDataAtParallel to prioritize the errgroup
       error
    5. Update all callers to use the DefaultPrefetchCount constant

    For mount operations, prefetch scales with -concurrentReaders:
    - concurrentReaders=16 (default) -> prefetch=4
    - concurrentReaders=32 -> prefetch=8 (capped)
    - concurrentReaders=4 -> prefetch=1
    Non-mount paths (WebDAV, query engine, MQ) use DefaultPrefetchCount.

    Further refactoring:
    - Use MaybeCache with a count parameter instead of a new MaybeCacheMany
      function, and an explicit concurrentReaders parameter on NewChunkGroup;
      existing callers without the parameter get the default of 16 concurrent
      readers, which keeps backward compatibility
    - Add an upper bound (128) on concurrentReaders to prevent excessive
      goroutine fan-out, and cap readerCacheLimit at 256 accordingly
    - Fix SetChunks: use Lock() instead of RLock(), since it writes to
      group.sections
2025-08-06  |  Context cancellation when range-reading large files (#7093)  |  Chris Lu  |  1  |  -4/+8

    - context cancellation while range-reading large files
    - cancellation for FUSE reads
    - pass a context into each function to avoid a race condition
    - update reader_at.go, reader_at_test.go, filechunk_group.go,
      weedfs_file_read.go, and weedfs_file_lseek.go per review; remove dead
      code; test cancellation

    Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-16  |  convert error formatting to %w everywhere (#6995)  |  Chris Lu  |  1  |  -2/+2
2025-06-03  |  change version directory  |  chrislu  |  1  |  -1/+2
2025-05-28  |  Add context with request (#6824)  |  Aleksey Kosov  |  1  |  -1/+1
2025-05-22  |  added context to filer_client method calls (#6808)  |  Aleksey Kosov  |  1  |  -7/+7

    Co-authored-by: akosov <a.kosov@kryptonite.ru>
2024-09-04  |  Revert "weed mount, weed dav add option to force cache"  |  chrislu  |  1  |  -2/+1

    This reverts commit 7367b976b05bfa69158a60f205dec970c48f50f0.

2024-09-04  |  weed mount, weed dav add option to force cache  |  chrislu  |  1  |  -1/+2
2024-08-06  |  [webdav] status code 500 if internal error from filer (#5865)  |  Konstantin Lebedev  |  1  |  -5/+18
2024-07-16  |  Added TLS for HTTP clients (#5766)  |  vadimartynov  |  1  |  -1/+6

    - Added a global HTTP client with a Do func and changed the code to use it
      (volume uploader, bench_filer_upload, stress_filer_upload,
      filer_server_handlers_proxy, command_fs_merge_volumes,
      command_volume_fsck, s3api_server); initialize the global client in the
      main funcs
    - Fixed NewHttpClient; added the CheckIsHttpsClientEnabled func; updated
      security.toml in scaffold
    - Reduced the visibility of some functions in the util/http/client pkg
    - Added the loadSecurityConfig function; use
      util.LoadSecurityConfiguration() in NewHttpClient
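A shared client of the kind this change introduces might look like the sketch below. `newHTTPClient` and `caFile` are illustrative; the real code derives its TLS settings from security.toml via util.LoadSecurityConfiguration:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"net/http"
	"os"
	"time"
)

// newHTTPClient builds one client intended to be shared by all outbound
// calls, so TLS configuration lives in a single place instead of being
// repeated at every call site.
func newHTTPClient(caFile string) (*http.Client, error) {
	tlsCfg := &tls.Config{}
	if caFile != "" {
		pem, err := os.ReadFile(caFile) // CA certificate in PEM form
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(pem)
		tlsCfg.RootCAs = pool // trust only the configured CA
	}
	return &http.Client{
		Timeout:   30 * time.Second,
		Transport: &http.Transport{TLSClientConfig: tlsCfg},
	}, nil
}

func main() {
	client, err := newHTTPClient("") // no CA file: system default roots
	if err != nil {
		panic(err)
	}
	_ = client // every component reuses this instead of ad-hoc clients
}
```

Sharing one client also pools connections across components, which the per-call clients it replaced could not do.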
2024-01-05  |  chore: add maxMB option for webdav (#5165)  |  Konstantin Lebedev  |  1  |  -2/+3
2024-01-03  |  fix: webdav avoid creating empty files (#5160)  |  Konstantin Lebedev  |  1  |  -3/+9
2024-01-03  |  fix: return ETag with MD5 in webdav responses (#5158)  |  Konstantin Lebedev  |  1  |  -0/+6
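The ETag format can be sketched as follows; `etagFor` is a hypothetical helper, and the real fix echoes the MD5 already recorded on the filer entry rather than re-hashing content on every response:

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// etagFor derives a quoted ETag from the content's MD5 checksum, the format
// clients compare against the MD5 they computed locally.
func etagFor(content []byte) string {
	return fmt.Sprintf(`"%x"`, md5.Sum(content))
}

func main() {
	fmt.Println(etagFor([]byte("hello"))) // "5d41402abc4b2a76b9719d911017c592"
}
```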
2023-11-07  |  weed/server: fix dropped webdav error  |  Lars Lehtonen  |  1  |  -0/+3
2023-11-01  |  refactor webdav subdirectory, fixes #4967 (#4969)  |  Nico D'Cotta  |  1  |  -4/+4

    - refactor webdav subdirectory, fixes #4967
    - fix bug where Name() was not called on the delegate wrappedFileInfo
2023-01-20  |  grpc connection to filer: add sw-client-id header  |  chrislu  |  1  |  -1/+1
2023-01-16  |  use one readerCache for the whole file  |  chrislu  |  1  |  -3/+6
2023-01-02  |  more solid weed mount (#4089)  |  Chris Lu  |  1  |  -15/+15

    - compare chunks by timestamp; track writes and reads by timestamp
    - move the oldest chunk to sealed by age instead of by fullness; sealed
      chunks are not re-used
    - lock on fh.entryViewCache; add an entry lock on file handle release;
      avoid nil with an empty fh.entry
    - group chunks into sections; add SeparateGarbageChunks
    - more efficient readResolvedChunks with a linked list; resolve chunks
      with ranges
    - enable both mem chunk and swap file chunk; a swap file chunk saves only
      successfully read data
    - optimize size calculation for changing large files; avoid walking the
      long list of chunks
    - refactor visible intervals and chunkViews from *list.List to a generic
      IntervalList (chunkViews become *IntervalList[*ChunkView]); add interval
      insertion with locking, since added chunks can arrive out of order
    - incrementally add chunks to readers: set start and stop offsets on the
      value object, clone it, and pass a pointer instead of copying by value
    - pick the inactive chunk with a time-decayed counter; use
      NewFileChunkSection to create sections; remove orderedMutex
      (*semaphore.Weighted) as it was not impactful
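A slice-backed sketch of the generic interval idea from this refactor (the real IntervalList is linked-list based and also handles overlap resolution and locking, which this toy version omits):

```go
package main

import "fmt"

// Interval is a simplified take on the generic interval the refactor
// introduces: a value keyed by [Start, Stop) byte offsets.
type Interval[T any] struct {
	Start, Stop int64
	Value       T
}

// IntervalList keeps intervals sorted by Start so that chunks added out of
// order still resolve into ordered visible ranges.
type IntervalList[T any] struct {
	items []Interval[T]
}

// Insert places iv at its sorted position by Start offset.
func (l *IntervalList[T]) Insert(iv Interval[T]) {
	i := 0
	for i < len(l.items) && l.items[i].Start < iv.Start {
		i++
	}
	l.items = append(l.items, Interval[T]{}) // grow by one
	copy(l.items[i+1:], l.items[i:])         // shift the tail right
	l.items[i] = iv
}

func main() {
	var l IntervalList[string]
	l.Insert(Interval[string]{Start: 8, Stop: 16, Value: "b"})
	l.Insert(Interval[string]{Start: 0, Stop: 8, Value: "a"}) // out of order
	for _, iv := range l.items {
		fmt.Println(iv.Start, iv.Value)
	}
}
```

The generics are the point of the refactor: one list type serves both visible intervals and `*ChunkView` views instead of two hand-maintained `*list.List` variants.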
2022-12-17  |  add -filer.path to webdav command (#4061)  |  lfhy  |  1  |  -2/+10
2022-11-15  |  refactor filer_pb.Entry and filer.Entry to use GetChunks()  |  chrislu  |  1  |  -3/+3

    for later locking on reading chunks

2022-09-14  |  go fmt  |  chrislu  |  1  |  -8/+8
2022-09-14  |  refactor(webdav_server): `modifiledTime` -> `modifiedTime` (#3676)  |  Ryan Russell  |  1  |  -5/+5
    Signed-off-by: Ryan Russell <git@ryanrussell.org>
2022-08-23  |  remove old raft servers if they don't answer pings for too long (#3398)  |  askeipx  |  1  |  -1/+1

    - add ping durations as options; rename ping fields
    - get masters through masterclient; use raft servers to ping them; remove
      a server from the leader
    - CheckMastersAlive for hashicorp raft only
    - pass waitForReady as a parameter through all functions
2022-08-20  |  adjust type  |  chrislu  |  1  |  -2/+1
2022-08-20  |  webdav: retryable data chunk upload  |  chrislu  |  1  |  -50/+23
2022-08-20  |  remove unused collection and replication from upload result  |  chrislu  |  1  |  -9/+6
2022-08-14  |  webdav: fix nil  |  chrislu  |  1  |  -2/+1

    fix https://github.com/seaweedfs/seaweedfs/issues/3440
2022-08-04  |  filer: prefer volume server in same data center (#3405)  |  Konstantin Lebedev  |  1  |  -0/+3

    - initial prefer-same-data-center support,
      https://github.com/seaweedfs/seaweedfs/issues/3404
    - GetDataCenter / GetDataCenterId; prefer the same data center for
      ReplicationSource; remove glog
2022-07-29  |  move to https://github.com/seaweedfs/seaweedfs  |  chrislu  |  1  |  -9/+9
2022-06-27  |  Fixes WebDAV 0-byte files  |  xdadrm  |  1  |  -0/+8

    Fixes the issue where files created via WebDAV show a size of 0 bytes
    when read via FUSE.

2022-06-06  |  filer: remove replication, collection, disk_type info from entry metadata  |  chrislu  |  1  |  -10/+6

    These metadata can change and are not used.

2022-02-26  |  use file size as max range  |  chrislu  |  1  |  -3/+2
2021-12-26  |  use streaming mode for long-poll grpc calls  |  chrislu  |  1  |  -7/+7

    Streaming mode creates a separate grpc connection for each call; this
    ensures the long-poll connections are properly closed.
2021-09-14  |  go fmt  |  Chris Lu  |  1  |  -12/+12
2021-09-12  |  change server address from string to a type  |  Chris Lu  |  1  |  -5/+4
2021-09-06  |  refactoring  |  Chris Lu  |  1  |  -1/+10
2021-07-19  |  optimization: improve random range queries for large files  |  Chris Lu  |  1  |  -1/+1
2021-05-07  |  add retry to assign volume  |  Chris Lu  |  1  |  -17/+24

    fix https://github.com/chrislusf/seaweedfs/issues/2056

2021-02-18  |  webdav: add replication setting  |  Chris Lu  |  1  |  -2/+3

    fix https://github.com/chrislusf/seaweedfs/issues/1817

2021-02-09  |  Merge branch 'master' into support_ssd_volume  |  Chris Lu  |  1  |  -2/+6
2021-01-31  |  webdav: can start together with "weed server" or "weed filer"  |  Chris Lu  |  1  |  -0/+1
2021-01-31  |  webdav: cache to version-specific folder  |  Chris Lu  |  1  |  -1/+4
2021-01-28  |  add back AdjustedUrl() related code  |  Chris Lu  |  1  |  -0/+3
2021-01-24  |  mount: outsideContainerClusterMode proxies through the filer  |  Chris Lu  |  1  |  -4/+1

    Running mount outside of the cluster no longer requires exposing all the
    volume servers outside of the cluster; chunk reads and writes go through
    the filer.
2021-01-24  |  Revert "mount: when outside cluster network, use filer as proxy to access volume servers"  |  Chris Lu  |  1  |  -1/+4

    This reverts commit 096e088d7bb2a5dce7573b24c2d3006d1cb6f9ec.

2021-01-24  |  mount: when outside cluster network, use filer as proxy to access volume servers  |  Chris Lu  |  1  |  -4/+1
2020-12-16  |  go fmt  |  Chris Lu  |  1  |  -2/+2
2020-12-13  |  rename from volumeType to diskType  |  Chris Lu  |  1  |  -2/+2
2020-12-13  |  adding volume type  |  Chris Lu  |  1  |  -0/+2
2020-12-01  |  fix tests  |  Chris Lu  |  1  |  -1/+1