Benchmarks
======================

Do we really need a benchmark? People often use benchmarks to compare systems,
but benchmarks can be misleading. The available resources, e.g., CPU, disk, memory,
and network, all matter a lot. And with Seaweed File System, a single node vs multiple nodes,
or benchmarking on one machine vs several machines, also makes a big difference.

Here are the steps on how to run the benchmark if you really need some numbers.

Unscientific single machine benchmarking
##################################################

For simplicity, I start all weed servers from one console. It is better to run each server in its own console.

For more realistic tests, please start them on different machines.

.. code-block:: bash

  # prepare data directories
  mkdir 3 4 5
  # start one combined master + volume server, plus two more volume servers
  ./weed server -dir=./3 -master.port=9333 -volume.port=8083 &
  ./weed volume -dir=./4 -port=8084 &
  ./weed volume -dir=./5 -port=8085 &
  ./weed benchmark -server=localhost:9333
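
Before starting the benchmark, you can check that all the volume servers have
registered with the master. This is an optional sanity check; it assumes the
master is on the default port 9333 and uses the master's ``/dir/status``
endpoint to dump the current topology.

.. code-block:: bash

  # the topology should list all three volume servers
  curl "http://localhost:9333/dir/status?pretty=y"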

What does the test do?
#############################

By default, the benchmark command writes 1 million files, each 1KB in size, uncompressed. 
For each file, one request is sent to the master to assign a file key, and a second request is sent to post the file to the assigned volume server. 
The written file keys are stored in a temp file.

Then the benchmark command reads the list of file keys and randomly reads the 1 million files back. 
The location of each volume is cached, so only a few requests are needed to look up the volume ids, 
and all the remaining requests fetch the file content.
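
Each write in the benchmark is just the standard two-step upload flow of Seaweed File System:
assign a file id from the master, then post the file content to the returned volume server.
Here is a minimal sketch of one such round trip; the file id ``3,01637037d6``, the volume
server address, and the sample file path are made-up example values.

.. code-block:: bash

  # step 1: ask the master to assign a file key (fid) and a volume server url
  curl "http://localhost:9333/dir/assign"
  # example response: {"fid":"3,01637037d6","url":"127.0.0.1:8083",...}

  # step 2: post the file content to the assigned volume server under that fid
  curl -F file=@/tmp/sample.txt "http://127.0.0.1:8083/3,01637037d6"

  # a random read in the benchmark is just a GET of the same fid
  curl "http://127.0.0.1:8083/3,01637037d6"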

Many options are configurable. Please check the help content:

.. code-block:: bash

  ./weed benchmark -h

Different Benchmark Target
###############################

The default "weed benchmark" uses 1 million 1KB file. This is to stress the number of files per second. 
Increasing the file size to 100KB or more can show much larger number of IO throughput in KB/second.
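
For example, to stress IO throughput rather than the number of files, you can write fewer
but larger files. This is only a sketch: the flag names assumed here (``-n`` for the number
of files and ``-size`` for the file size in bytes) should be verified against the output of
``./weed benchmark -h`` for your version.

.. code-block:: bash

  # assumed flags: -n = number of files, -size = bytes per file; verify with -h
  ./weed benchmark -server=localhost:9333 -n=10000 -size=102400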

My own unscientific single machine results
###################################################

My own results on a MacBook with a solid state disk, CPU: 1 Intel Core i7 at 2.2GHz.

.. code-block:: bash

  Write 1 million 1KB files:

  Concurrency Level:      64
  Time taken for tests:   182.456 seconds
  Complete requests:      1048576
  Failed requests:        0
  Total transferred:      1073741824 bytes
  Requests per second:    5747.01 [#/sec]
  Transfer rate:          5747.01 [Kbytes/sec]

  Connection Times (ms)
                min      avg        max      std
  Total:        0.3      10.9       430.9      5.7

  Percentage of the requests served within a certain time (ms)
     50%     10.2 ms
     66%     12.0 ms
     75%     12.6 ms
     80%     12.9 ms
     90%     14.0 ms
     95%     14.9 ms
     98%     16.2 ms
     99%     17.3 ms
    100%    430.9 ms
  Randomly read 1 million files:

  Concurrency Level:      64
  Time taken for tests:   80.732 seconds
  Complete requests:      1048576
  Failed requests:        0
  Total transferred:      1073741824 bytes
  Requests per second:    12988.37 [#/sec]
  Transfer rate:          12988.37 [Kbytes/sec]

  Connection Times (ms)
                min      avg        max      std
  Total:        0.0      4.7       254.3      6.3

  Percentage of the requests served within a certain time (ms)
     50%      2.6 ms
     66%      2.9 ms
     75%      3.7 ms
     80%      4.7 ms
     90%     10.3 ms
     95%     16.6 ms
     98%     26.3 ms
     99%     34.8 ms
    100%    254.3 ms

My own replication 001 single machine results
##############################################

First, create the benchmark volumes directly with replication 001:

.. code-block:: bash

  curl "http://localhost:9333/vol/grow?collection=benchmark&count=3&replication=001&pretty=y"
  # Later, after finishing the test, remove the benchmark collection
  curl "http://localhost:9333/col/delete?collection=benchmark&pretty=y"
  
  Write 1 million 1KB files results:

  Concurrency Level:      64
  Time taken for tests:   174.949 seconds
  Complete requests:      1048576
  Failed requests:        0
  Total transferred:      1073741824 bytes
  Requests per second:    5993.62 [#/sec]
  Transfer rate:          5993.62 [Kbytes/sec]

  Connection Times (ms)
                min      avg        max      std
  Total:        0.3      10.4       296.6      4.4

  Percentage of the requests served within a certain time (ms)
     50%      9.7 ms
     66%     11.5 ms
     75%     12.1 ms
     80%     12.4 ms
     90%     13.4 ms
     95%     14.3 ms
     98%     15.5 ms
     99%     16.7 ms
    100%    296.6 ms
  Randomly read results:

  Concurrency Level:      64
  Time taken for tests:   53.987 seconds
  Complete requests:      1048576
  Failed requests:        0
  Total transferred:      1073741824 bytes
  Requests per second:    19422.81 [#/sec]
  Transfer rate:          19422.81 [Kbytes/sec]

  Connection Times (ms)
                min      avg        max      std
  Total:        0.0      3.0       256.9      3.8

  Percentage of the requests served within a certain time (ms)
     50%      2.7 ms
     66%      2.9 ms
     75%      3.2 ms
     80%      3.5 ms
     90%      4.4 ms
     95%      5.6 ms
     98%      7.4 ms
     99%      9.4 ms
    100%    256.9 ms

How can replication 001 write faster than no replication? I could not tell.
Very likely, the computer was in turbo mode. I cannot reproduce it consistently either.
The numbers are posted here just to illustrate that numbers can lie.
Don't quote the exact numbers; just getting a rough idea of the performance is good enough.