It is common for a system admin to think that their disks have much better throughput than syncio reports. Have a look at why this might be the case.
Test it yourself
NOTE: We'd like to encourage you to test any disk system you are considering using with the OpenEdge database, preferably before you commit to it. Ideally, you would use the syncio test as provided, but that requires OpenEdge and ProTop to be installed and may not be practical. Instead, we have provided below a simple dd test that is nearly equivalent.
If you already have OpenEdge and ProTop installed, there is an example "dd" command at the top of bin/syncio.[sh|bat] that should provide very similar results to syncio. For those of you who do not have ProTop, the command is:
dd if=/dev/zero of=./test.out bs=16k count=6144 oflag=dsync
of=./test.out   # point this to the directory you are testing (db, ai and bi files); same as given to syncio
bs=16k          # the blocksize syncio uses to grow the before-image file
count=6144      # number of 16k blocks to add, equivalent to the bigrow (syncio) test
oflag=dsync     # these writes MUST be synchronous
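Putting it together, a complete run on a Linux box with GNU dd might look like the sketch below; the /db directory and the timing in the sample output are illustrative only, so point it at the filesystem that will actually hold your db, bi and ai files:

cd /db                                                        # the filesystem under test
dd if=/dev/zero of=./test.out bs=16k count=6144 oflag=dsync   # 6144 x 16k = 96 MiB of synchronous writes
# GNU dd prints the result on its final status line, for example:
#   100663296 bytes (101 MB, 96 MiB) copied, 4.79 s, 21.0 MB/s
rm ./test.out                                                 # remove the test file afterwards

The MB/s figure on that last line is the number to compare with your syncio results.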
Don't be fooled by "burst"
Why doesn't the dd test match my syncio values?
- The "guarantee" from the provider wasn't a guarantee
- dd and syncio were not testing the same filesystem
- The background workload on the target filesystem was not the same when the two tests were run
- One of the tests took advantage of "burst mode" (see the sketch after this list)
- The parameters given to "dd" were not equivalent to what OpenEdge is actually doing
- The version of "dd" being used may not support the parameters required
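A quick way to spot burst-mode flattery is to repeat the test several times back to back: cached or credit-based storage often shows a fast first run and noticeably slower runs once the burst allowance is used up. A minimal sketch, again assuming GNU dd and run from the filesystem under test:

for i in 1 2 3 4 5; do
  dd if=/dev/zero of=./test.out bs=16k count=6144 oflag=dsync 2>&1 | tail -1   # keep only the throughput line
  rm -f ./test.out
done

If the later runs are much slower than the first, that first number is the burst figure, not the sustained throughput your database will see during a busy period. And if your dd rejects oflag=dsync (some non-GNU versions do), the results are not comparable to syncio at all.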
What should my syncio speed be?
From a write-intensive OLTP database point of view, and looking at the syncio trends in the ProTop Portal:
- < 10 MB / second = very bad; replace the db IO subsystem ASAP
- 10 MB / second = barely acceptable; order storage now
- 20 MB / second = start looking for better storage
- 30 MB / second = start thinking about replacing the storage subsystem
- 100 MB / second = excellent!
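To translate a dd run into these bands, remember that the test above writes exactly 96 MiB (6144 x 16k blocks), so throughput is simply 96 divided by the elapsed seconds: for example, 96 MiB / 9.6 s is roughly 10 MB / second (order storage now), while 96 MiB / 1 s is roughly 96 MB / second (excellent). The 9.6-second and 1-second timings here are illustrative, not measurements.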
CAVEAT: On less write-intensive systems where users are not complaining of slowness, look at MTX latch waits. MTX latch requests are made when a process needs to write to disk. If MTX latch waits are low or nonexistent, make a note that your IO subsystem might be a future point of contention and evaluate it more closely as your business grows.