I'm using Realm 1.0.2 on OS X, and my Realm file has reached ~3.5 GB. Now, writing a batch of new objects takes around 30s to 1min on average, which makes things pretty slow. After profiling, it looks like commitWriteTransaction is taking a big chunk of the time.
Is this performance normal/expected in this case? And if so, are there strategies available to make saving faster?
Realm uses copy-on-write semantics whenever changes are performed in write transactions.
The larger the structure that has to be forked & copied, the longer it takes to perform the operation.
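As a loose analogy (not Realm's actual storage engine), Swift's own collections also use copy-on-write, and they show the same cost profile: sharing a structure is cheap, but the first mutation of a shared structure pays for copying all of it.

```swift
import Foundation

// Swift arrays are copy-on-write. Assigning one to another variable is
// cheap (both refer to the same buffer), but the first mutation of the
// shared copy forces a full copy of the storage, so its cost grows with
// the size of the structure, just like a large write transaction.
let original = Array(repeating: 0.0, count: 1_000_000)
var shared = original   // cheap: no elements are copied yet
shared[0] = 1.0         // triggers a copy of all 1M elements
// original is untouched; shared now owns its own buffer
```

The bigger the shared structure, the more expensive that first mutation becomes, which mirrors why commits against a multi-gigabyte Realm file can slow down.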
Here's a small, unscientific benchmark run on a 2.8GHz i7 MacBook Pro:
    import Foundation
    import RealmSwift

    class Model: Object {
        dynamic var prop1 = 0.0
        dynamic var prop2 = 0.0
    }

    // Add 60 million objects with 2 Double properties in batches of 10 million
    autoreleasepool {
        for _ in 0..<6 {
            let start = NSDate()
            let realm = try! Realm()
            try! realm.write {
                for _ in 0..<10_000_000 {
                    realm.add(Model())
                }
            }
            print(realm.objects(Model.self).count)
            print("took \(-start.timeIntervalSinceNow)s")
        }
    }

    // Add 1 item to the Realm
    autoreleasepool {
        let start = NSDate()
        let realm = try! Realm()
        try! realm.write {
            realm.add(Model())
        }
        print(realm.objects(Model.self).count)
        print("took \(-start.timeIntervalSinceNow)s")
    }
This logs the following:
    10000000
    took 25.6072470545769s
    20000000
    took 23.7239990234375s
    30000000
    took 24.4556020498276s
    40000000
    took 23.9790390133858s
    50000000
    took 24.5923230051994s
    60000000
    took 24.2157150506973s
    60000001
    took 0.0106720328330994s
So you can see that adding many objects to Realm, with no relationships, is quite fast, and the time stays linearly proportional to the number of objects being added.
So it's likely you're doing more than just adding objects to the Realm. Maybe you're updating existing objects, causing them to be copied?
Or, if you're reading values of objects as part of your write transactions, that too can grow proportionally with the number of objects.
Avoiding these things will shorten your write transactions.
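One further mitigation worth trying is splitting one huge write into several smaller transactions, so no single commit has to fork and copy as much at once. Below is a hedged sketch: the `batches` helper is hypothetical (not part of Realm's API), and the Realm usage in the comment assumes the `Model` class and the `Realm()`, `write`, and `add` calls shown in the benchmark above.

```swift
import Foundation

// Hypothetical helper (not part of Realm): split work into fixed-size
// batches so each batch can be committed in its own, shorter write
// transaction instead of one enormous commit.
func batches<T>(_ items: [T], size: Int) -> [[T]] {
    return stride(from: 0, to: items.count, by: size).map { start in
        Array(items[start..<min(start + size, items.count)])
    }
}

// Sketch of how this could be used with Realm (assumes the Model class
// from the benchmark above):
//
//     for batch in batches(newModels, size: 10_000) {
//         autoreleasepool {
//             let realm = try! Realm()
//             try! realm.write {
//                 realm.add(batch)   // add only; no reads or updates here
//             }
//         }
//     }
print(batches(Array(1...5), size: 2))   // → [[1, 2], [3, 4], [5]]
```

Whether smaller transactions actually help here depends on your workload; the key point from the benchmark is to keep reads and updates of existing objects out of the write blocks where possible.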