path: root/weed/replication/repl_util
Age | Commit message | Author | Files | Lines
2025-10-24 | Clients to volume server require JWT tokens for all read operations (#7376) | Chris Lu | 1 | -1/+3
* [Admin UI] Login not possible due to securecookie error
* avoid 404 favicon
* Update weed/admin/dash/auth_middleware.go
* address comments
* avoid variable over shadowing
* log session save error
* When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations.
* reuse fileId
* refactor

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
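For context, a minimal Go sketch of what a read-time token could look like when jwt.signing.read.key is set: the client signs a short-lived token bound to the fileId and sends it with the GET to the volume server. The claim fields, helper names, and the golang-jwt dependency below are illustrative assumptions, not the actual SeaweedFS security package API.

```go
// Illustrative sketch only: signing a per-fileId read token and attaching it
// to a volume-server GET. Claim names and helpers are assumptions.
package readjwt

import (
	"fmt"
	"net/http"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

func genReadJwt(signingKey []byte, fileId string) (string, error) {
	claims := jwt.MapClaims{
		"fid": fileId,                                  // bind the token to one file id
		"exp": time.Now().Add(10 * time.Second).Unix(), // short-lived token
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(signingKey)
}

func readChunk(volumeUrl, fileId string, signingKey []byte) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, fmt.Sprintf("%s/%s", volumeUrl, fileId), nil)
	if err != nil {
		return nil, err
	}
	if len(signingKey) > 0 { // only when a read signing key is configured
		token, err := genReadJwt(signingKey, fileId)
		if err != nil {
			return nil, err
		}
		req.Header.Set("Authorization", "Bearer "+token)
	}
	return http.DefaultClient.Do(req)
}
```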
2025-05-28 | Add context with request (#6824) | Aleksey Kosov | 1 | -2/+3
2024-07-16 | Added tls for http clients (#5766) | vadimartynov | 1 | -2/+2
* Added global http client
* Added Do func for global http client
* Changed the code to use the global http client
* Fix http client in volume uploader
* Fixed pkg name
* Fixed http util funcs
* Fixed http client for bench_filer_upload
* Fixed http client for stress_filer_upload
* Fixed http client for filer_server_handlers_proxy
* Fixed http client for command_fs_merge_volumes
* Fixed http client for command_fs_merge_volumes and command_volume_fsck
* Fixed http client for s3api_server
* Added init global client for main funcs
* Rename global_client to client
* Changed: fixed NewHttpClient; added CheckIsHttpsClientEnabled func; updated security.toml in scaffold
* Reduce the visibility of some functions in the util/http/client pkg
* Added the loadSecurityConfig function
* Use util.LoadSecurityConfiguration() in NewHttpClient func
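A rough sketch of the "one shared HTTP client" idea described in these bullets: build a single http.Client with the TLS material from security.toml once, then reuse it for all requests. The function name and parameters here are illustrative, not the real util/http/client or util.LoadSecurityConfiguration() API.

```go
// Minimal sketch, assuming cert/key/CA file paths loaded from configuration.
package httpclient

import (
	"crypto/tls"
	"crypto/x509"
	"net/http"
	"os"
)

// NewHTTPSClient builds one reusable client with an optional client
// certificate (mutual TLS) and an optional private CA bundle.
func NewHTTPSClient(certFile, keyFile, caFile string) (*http.Client, error) {
	tlsCfg := &tls.Config{MinVersion: tls.VersionTLS12}
	if certFile != "" && keyFile != "" {
		cert, err := tls.LoadX509KeyPair(certFile, keyFile)
		if err != nil {
			return nil, err
		}
		tlsCfg.Certificates = []tls.Certificate{cert}
	}
	if caFile != "" {
		pem, err := os.ReadFile(caFile)
		if err != nil {
			return nil, err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(pem)
		tlsCfg.RootCAs = pool
	}
	return &http.Client{Transport: &http.Transport{TLSClientConfig: tlsCfg}}, nil
}
```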
2023-01-02 | more solid weed mount (#4089) | Chris Lu | 1 | -3/+4
* compare chunks by timestamp
* fix slab clearing error
* fix test compilation
* move oldest chunk to sealed, instead of by fullness
* lock on fh.entryViewCache
* remove verbose logs
* revert slab clearing
* less logs
* less logs
* track write and read by timestamp
* remove useless logic
* add entry lock on file handle release
* use mem chunk only, swap file chunk has problems
* comment out code that may be used later
* add debug mode to compare data read and write
* more efficient readResolvedChunks with linked list
* small optimization
* fix test compilation
* minor fix on writer
* add SeparateGarbageChunks
* group chunks into sections
* turn off debug mode
* fix tests
* fix tests
* tmp enable swap file chunk
* Revert "tmp enable swap file chunk" (reverts commit 985137ec472924e4815f258189f6ca9f2168a0a7)
* simple refactoring
* simple refactoring
* do not re-use swap file chunk. Sealed chunks should not be re-used.
* comment out debugging facilities
* either mem chunk or swap file chunk is fine now
* remove orderedMutex as *semaphore.Weighted not found impactful
* optimize size calculation for changing large files
* optimize performance to avoid going through the long list of chunks
* still problems with swap file chunk
* rename
* tiny optimization
* swap file chunk save only successfully read data
* fix
* enable both mem and swap file chunk
* resolve chunks with range
* rename
* fix chunk interval list
* also change file handle chunk group when adding chunks
* pick in-active chunk with time-decayed counter
* fix compilation
* avoid nil with empty fh.entry
* refactoring
* rename
* rename
* refactor visible intervals to *list.List
* refactor chunkViews to *list.List
* add IntervalList for generic interval list
* change visible interval to use IntervalList in generics
* change chunkViews to *IntervalList[*ChunkView]
* use NewFileChunkSection to create
* rename variables
* refactor
* fix renaming leftover
* renaming
* renaming
* add insert interval
* interval list adds lock
* incrementally add chunks to readers. Fixes: 1. set start and stop offset for the value object; 2. clone the value object; 3. use pointer instead of copy-by-value when passing to interval.Value; 4. use insert interval since adding chunk could be out of order
* fix tests compilation
* fix tests compilation
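The later bullets (IntervalList with generics, insert interval, interval list lock) describe a generic interval list keyed by start offset that tolerates out-of-order inserts. A toy sketch of that structure, with illustrative names rather than the actual IntervalList API:

```go
// Rough sketch of an offset-ordered interval list with a lock, assuming
// made-up type and method names.
package interval

import "sync"

type Interval[T any] struct {
	Start, Stop int64
	Value       T
	Next        *Interval[T]
}

type IntervalList[T any] struct {
	mu   sync.Mutex
	head *Interval[T]
}

// InsertInterval places the new interval so the list stays sorted by Start,
// even when chunks are added out of order.
func (l *IntervalList[T]) InsertInterval(start, stop int64, value T) {
	l.mu.Lock()
	defer l.mu.Unlock()
	node := &Interval[T]{Start: start, Stop: stop, Value: value}
	if l.head == nil || start < l.head.Start {
		node.Next = l.head
		l.head = node
		return
	}
	cur := l.head
	for cur.Next != nil && cur.Next.Start <= start {
		cur = cur.Next
	}
	node.Next = cur.Next
	cur.Next = node
}
```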
2022-07-29 | move to https://github.com/seaweedfs/seaweedfs | chrislu | 1 | -4/+4
2022-02-28 | filer.backup: fix backing up encrypted chunks | chrislu | 1 | -1/+1
I have done filer.backup test.

replication.toml:
  [sink.local]
  enabled = true
  directory = "/srv/test"

system@dat1:/srv/test$ weed filer.backup -filer=app1:8888 -filerProxy
I0228 12:39:28 19571 filer_replication.go:129] Configure sink to local
I0228 12:39:28 19571 filer_backup.go:98] resuming from 2022-02-28 12:04:20.210984693 +0100 CET
I0228 12:39:29 19571 filer_backup.go:113] backup app1:8888 progressed to 2022-02-28 12:04:20.211726749 +0100 CET 0.33/sec

system@dat1:/srv/test$ ls -l
total 16
drwxr-xr-x 2 system system 4096 Feb 28 12:39 a
-rw-r--r-- 1 system system   48 Feb 28 12:39 fu.txt
-rw-r--r-- 1 system system   32 Feb 28 12:39 _index.html
-rw-r--r-- 1 system system   68 Feb 28 12:39 index.php
system@dat1:/srv/test$ cat fu.txt
? ?=?^??`?f^};?{4?Z%?X0=??rV????|"?1??θΈͺ~??

On the active mount on the target server it's:

system@app1:/srv/app$ ls -l
total 2
drwxrwxr-x 1 system system  0 Feb 28 12:04 a
-rw-r--r-- 1 system system 20 Feb 28 12:04 fu.txt
-rw-r--r-- 1 system system  4 Feb 28 12:04 _index.html
-rw-r--r-- 1 system system 40 Feb 28 12:04 index.php
system@app1:/srv/app$ cat fu.txt
This is static boy!

Filer was started with: weed filer master="app1:9333,app2:9333,app3:9333" -encryptVolumeData

It seems like it's still encrypted?
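The transcript shows the local sink receiving chunk bytes that are still ciphered when the filer runs with -encryptVolumeData. The general direction of the fix is for replication to decrypt each chunk with its per-chunk cipher key before handing data to the sink. A hedged AES-GCM sketch of that idea; the helper name and key/nonce layout are assumptions, not the exact SeaweedFS cipher utility API:

```go
// Sketch: decrypt nonce||ciphertext with a per-chunk AES-GCM key before the
// backup sink writes it out; plain chunks (empty key) pass through untouched.
package backupsink

import (
	"crypto/aes"
	"crypto/cipher"
	"fmt"
)

func decryptChunk(data, cipherKey []byte) ([]byte, error) {
	if len(cipherKey) == 0 {
		return data, nil
	}
	block, err := aes.NewCipher(cipherKey)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(data) < gcm.NonceSize() {
		return nil, fmt.Errorf("cipher data too short: %d bytes", len(data))
	}
	nonce, ciphertext := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}
```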
2021-11-28 | revert | Chris Lu | 1 | -1/+1
2021-11-28 | read deleted chunks when replicating data | Chris Lu | 1 | -1/+1
2021-03-16 | revert fasthttp changes | Chris Lu | 1 | -1/+1
related to https://github.com/chrislusf/seaweedfs/issues/1907
2021-02-28 | rename file | Chris Lu | 1 | -1/+1
2021-02-12 | use fasthttp lib to read | Chris Lu | 1 | -1/+1
2021-02-03 | RabbitMQ delay retry with Dead Letter Exchange | Konstantin Lebedev | 1 | -4/+6
https://github.com/chrislusf/seaweedfs/issues/1773 https://github.com/google/go-cloud/issues/2952
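The referenced pattern delays a retry by parking failed messages in a TTL'd queue that has no consumers and whose dead-letter exchange points back at the work exchange, so the message reappears after the delay. A sketch of that topology with the amqp091-go client; the exchange/queue names and the 30-second TTL are made up here, and the actual commit wires the delay through the go-cloud pubsub layer rather than declaring queues directly:

```go
// Sketch of the dead-letter-exchange delayed-retry topology.
package mqretry

import (
	amqp "github.com/rabbitmq/amqp091-go"
)

func declareRetryTopology(ch *amqp.Channel) error {
	// Main work exchange and queue.
	if err := ch.ExchangeDeclare("work", "direct", true, false, false, false, nil); err != nil {
		return err
	}
	if _, err := ch.QueueDeclare("work.q", true, false, false, false, nil); err != nil {
		return err
	}
	if err := ch.QueueBind("work.q", "task", "work", false, nil); err != nil {
		return err
	}
	// Retry queue: messages sit here for 30s, then are dead-lettered back
	// to the "work" exchange with the original routing key.
	_, err := ch.QueueDeclare("work.retry.q", true, false, false, false, amqp.Table{
		"x-dead-letter-exchange":    "work",
		"x-dead-letter-routing-key": "task",
		"x-message-ttl":             int32(30000),
	})
	return err
}
```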
2020-10-13 | Only wait on retryable requests | Chris Lu | 1 | -1/+1
2020-10-07 | read from alternative replica | Chris Lu | 1 | -0/+40
related to https://github.com/chrislusf/seaweedfs/issues/1512
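The 40 lines added here are the helper that copies a chunk from whichever replica responds. A simplified sketch of that retry-over-replicas loop, with illustrative names rather than the exact repl_util function signature:

```go
// Sketch: try each replica URL in order and fail only if every replica fails.
package replutil

import (
	"fmt"
	"io"
	"net/http"
)

func readFromAnyReplica(urls []string) ([]byte, error) {
	var lastErr error
	for _, u := range urls {
		resp, err := http.Get(u)
		if err != nil {
			lastErr = err
			continue // this replica is unreachable, try the next one
		}
		data, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err == nil && resp.StatusCode == http.StatusOK {
			return data, nil
		}
		lastErr = fmt.Errorf("read %s: status %d, err %v", u, resp.StatusCode, err)
	}
	return nil, fmt.Errorf("all %d replicas failed: %v", len(urls), lastErr)
}
```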