path: root/weed/filer/filechunks.go
Age        | Commit message | Author | Files | Lines
2025-05-28 | Add context with request (#6824) | Aleksey Kosov | 1 | -9/+10
2023-01-10 | refactor | chrislu | 1 | -2/+3
2023-01-06 | mount: faster add chunks | chrislu | 1 | -0/+11
2023-01-02 | more solid weed mount (#4089) | Chris Lu | 1 | -138/+110
    * compare chunks by timestamp
    * fix slab clearing error
    * fix test compilation
    * move oldest chunk to sealed, instead of by fullness
    * lock on fh.entryViewCache
    * remove verbose logs
    * revert slab clearing
    * less logs
    * less logs
    * track write and read by timestamp
    * remove useless logic
    * add entry lock on file handle release
    * use mem chunk only, swap file chunk has problems
    * comment out code that maybe used later
    * add debug mode to compare data read and write
    * more efficient readResolvedChunks with linked list
    * small optimization
    * fix test compilation
    * minor fix on writer
    * add SeparateGarbageChunks
    * group chunks into sections
    * turn off debug mode
    * fix tests
    * fix tests
    * tmp enable swap file chunk
    * Revert "tmp enable swap file chunk"
      This reverts commit 985137ec472924e4815f258189f6ca9f2168a0a7.
    * simple refactoring
    * simple refactoring
    * do not re-use swap file chunk. Sealed chunks should not be re-used.
    * comment out debugging facilities
    * either mem chunk or swap file chunk is fine now
    * remove orderedMutex as *semaphore.Weighted not found impactful
    * optimize size calculation for changing large files
    * optimize performance to avoid going through the long list of chunks
    * still problems with swap file chunk
    * rename
    * tiny optimization
    * swap file chunk save only successfully read data
    * fix
    * enable both mem and swap file chunk
    * resolve chunks with range
    * rename
    * fix chunk interval list
    * also change file handle chunk group when adding chunks
    * pick in-active chunk with time-decayed counter
    * fix compilation
    * avoid nil with empty fh.entry
    * refactoring
    * rename
    * rename
    * refactor visible intervals to *list.List
    * refactor chunkViews to *list.List
    * add IntervalList for generic interval list
    * change visible interval to use IntervalList in generics
    * change chunkViews to *IntervalList[*ChunkView]
    * use NewFileChunkSection to create
    * rename variables
    * refactor
    * fix renaming leftover
    * renaming
    * renaming
    * add insert interval
    * interval list adds lock
    * incrementally add chunks to readers
      Fixes:
      1. set start and stop offset for the value object
      2. clone the value object
      3. use pointer instead of copy-by-value when passing to interval.Value
      4. use insert interval since adding chunk could be out of order
    * fix tests compilation
    * fix tests compilation
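The commit above introduces a generic IntervalList and uses "insert interval" because chunks can arrive out of order. A minimal sketch of that idea, with a hypothetical simplified type (the real `IntervalList[*ChunkView]` in `weed/filer` also handles overlap resolution and locking, which are omitted here):

```go
package main

import "fmt"

// Interval is one node of a singly linked interval list.
type Interval[T any] struct {
	Start, Stop int64
	Value       T
	Next        *Interval[T]
}

// IntervalList keeps intervals sorted by Start offset.
type IntervalList[T any] struct {
	head *Interval[T]
}

// InsertInterval places the new interval by ascending Start, so
// out-of-order chunk additions still land in the right position.
func (l *IntervalList[T]) InsertInterval(start, stop int64, v T) {
	n := &Interval[T]{Start: start, Stop: stop, Value: v}
	if l.head == nil || start < l.head.Start {
		n.Next = l.head
		l.head = n
		return
	}
	cur := l.head
	for cur.Next != nil && cur.Next.Start < start {
		cur = cur.Next
	}
	n.Next = cur.Next
	cur.Next = n
}

// Starts returns the Start offsets in list order, for inspection.
func (l *IntervalList[T]) Starts() []int64 {
	var out []int64
	for cur := l.head; cur != nil; cur = cur.Next {
		out = append(out, cur.Start)
	}
	return out
}

func main() {
	var l IntervalList[string]
	// chunks may be added out of order
	l.InsertInterval(100, 200, "b")
	l.InsertInterval(0, 100, "a")
	l.InsertInterval(200, 300, "c")
	fmt.Println(l.Starts()) // [0 100 200]
}
```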
2022-11-30 | Return ETag from remote when file doesn't exist on Filer (#4025) | aronneagu | 1 | -0/+3
2022-11-15 | refactor filer_pb.Entry and filer.Entry to use GetChunks() | chrislu | 1 | -3/+3
    for later locking on reading chunks
2022-10-28 | refactor filer proto chunk variable from mtime to modified_ts_ns | chrislu | 1 | -8/+8
2022-08-01 | filer.sync: fix synchronization logic in active-active mode | chrislu | 1 | -1/+4
    fix https://github.com/seaweedfs/seaweedfs/issues/3328
2022-07-29 | move to https://github.com/seaweedfs/seaweedfs | chrislu | 1 | -3/+3
2022-07-07 | minor [dev] | chrislu | 1 | -3/+3
2022-07-07 | remove dead code | chrislu | 1 | -9/+1
2022-06-19 | fix: invalid chunk data when failed to read manifests | geekboood | 1 | -2/+6
2022-04-18 | enhancement: replace sort.Slice with slices.SortFunc to reduce reflection | justin | 1 | -10/+8
2022-04-05 | prevent nil | chrislu | 1 | -0/+3
2022-03-21 | mount: set file size if it is only on remote gateway | chrislu | 1 | -1/+7
2022-02-07 | filer.sync: fix replicating partially updated file | chrislu | 1 | -0/+15
    Run two servers with volumes and filers:
      server -dir=Server1alpha -master.port=11000 -filer -filer.port=11001 -volume.port=11002
      server -dir=Server1sigma -master.port=11006 -filer -filer.port=11007 -volume.port=11008
    Run active-passive filer.sync:
      filer.sync -a localhost:11007 -b localhost:11001 -isActivePassive
    Upload a file to port 11007:
      curl -F file=@/Desktop/9.xml "http://localhost:11007/testFacebook/"
    If we request the file from both servers now, everything is correct, even if we append data to the file and upload it again:
      curl "http://localhost:11007/testFacebook/9.xml" equals curl "http://localhost:11001/testFacebook/9.xml"
    However, if we change already existing data in the file (for example, change the first line in the file, reducing its length), the file on the second server becomes invalid and no longer matches the first file.
    This problem occurs on line 202 of filer_sink.go. In particular, it is due to incorrect mapping of chunk names in the DoMinusChunks function. The names of deletedChunks do not match the chunks of existingEntry.Chunks, since the former come from another server and have different addressing (names) than the server where the file is being overwritten. As a result, deleted chunks are not actually deleted on the server the file is replicated to.
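The bug report above hinges on how chunk subtraction is keyed. A minimal sketch of a DoMinusChunks-style subtraction keyed by chunk FileId, with a simplified stand-in type (the real function operates on filer_pb.FileChunk); as the commit explains, this only removes the right chunks when both sides use the same chunk addressing:

```go
package main

import "fmt"

// FileChunk is an illustrative stand-in for filer_pb.FileChunk.
type FileChunk struct {
	FileId string
	Offset int64
	Size   uint64
}

// doMinusChunks returns the chunks of `as` that do not appear in `bs`,
// matching by FileId.
func doMinusChunks(as, bs []FileChunk) []FileChunk {
	seen := make(map[string]struct{}, len(bs))
	for _, b := range bs {
		seen[b.FileId] = struct{}{}
	}
	var delta []FileChunk
	for _, a := range as {
		if _, found := seen[a.FileId]; !found {
			delta = append(delta, a)
		}
	}
	return delta
}

func main() {
	existing := []FileChunk{{FileId: "3,01"}, {FileId: "3,02"}}
	// Chunk names arriving from the source cluster use different
	// addressing, so nothing matches and nothing gets deleted:
	remote := []FileChunk{{FileId: "7,0a"}, {FileId: "7,0b"}}
	fmt.Println(len(doMinusChunks(existing, remote))) // 2
}
```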
2021-10-16 | turn on new faster algorithm to translate into visible chunks | Chris Lu | 1 | -1/+1
2021-10-16 | temporarily reverting | Chris Lu | 1 | -1/+1
2021-10-16 | Revert "remove deprecated code" | Chris Lu | 1 | -3/+115
    This reverts commit de7688c53989d90799e349ac105d3c8d4a06e6a7.
2021-10-16 | remove deprecated code | Chris Lu | 1 | -115/+3
2021-10-16 | faster file read for large files | Chris Lu | 1 | -0/+23
2021-07-19 | optimization: improve random range query for large files [origin/remote_overlay] | Chris Lu | 1 | -6/+6
2021-04-28 | fix aws style Etag for chunks | Konstantin Lebedev | 1 | -4/+2
2021-03-16 | reverting 7d57664c2d80f2b7d3eb4cecc57a3275bafee44d | Chris Lu | 1 | -4/+0
2021-03-12 | mount: internals switch to filer.Entry instead of protobuf | Chris Lu | 1 | -0/+4
2021-01-06 | add "weed filer.cat" to read files directly from volume servers | Chris Lu | 1 | -4/+5
2020-10-05 | We return etag using the same algorithm as aws s3 | Konstantin Lebedev | 1 | -5/+7
    https://teppen.io/2018/06/23/aws_s3_etags/
2020-09-09 | filer: cross cluster synchronization | Chris Lu | 1 | -0/+5
2020-09-01 | rename filer2 to filer | Chris Lu | 1 | -0/+284