Optimizing queries over date ranges in MongoDB (with other fields)
I'm building a scheduling/booking system: a bunch of providers serve customers at a bunch of locations at certain times.
My appointment object looks like this:
{
    "providername" : { "type": "string" },
    "location"     : { "type": "string" },
    "starttime"    : { "type": "tba" },
    "endtime"      : { "type": "tba" }
}
The appointments are stored in MongoDB, and I'll need to run a lot of searches with any of these 4 fields set (there are 10-20 searches for each insertion, so I'm optimizing for reads here).
I'm thinking of storing the start and end times in milliseconds, setting a compound index on these two, and setting 2 separate indices on provider name and location.
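For reference, here's what I have in mind, in mongo-shell syntax (the collection name `appointments` is just a placeholder):

```javascript
// Compound index over the two time fields, plus separate
// single-field indexes on provider name and location.
db.appointments.createIndex({ starttime: 1, endtime: 1 });
db.appointments.createIndex({ providername: 1 });
db.appointments.createIndex({ location: 1 });
```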
But I'm wondering whether I can improve performance (and ultimately reduce load on the server) with the following optimizations:
What's more efficient for range queries: storing the start/end time as milliseconds (a long), as a java.util.Date, or maybe as a string?
If I'm only interested in searching by date, perhaps I should represent the time as a string (e.g. "2015/10/11") and search over a lexicographic range? And in that case, if I'm looking for a range of 10 days, am I better off with a single range query, or with running 10 threads, each doing a hashed-key search for one date?
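To make the comparison concrete, here's a small sketch (plain Node.js, no MongoDB needed) of how a 10-day window collapses to a single numeric range once times are stored as epoch milliseconds, so one range query would cover it:

```javascript
// A 10-day window expressed as one numeric [start, end) range over
// epoch milliseconds -- a single range query instead of 10 per-day lookups.
const dayMs = 24 * 60 * 60 * 1000;
const start = Date.UTC(2015, 9, 11);  // 2015/10/11 00:00 UTC (month is 0-based)
const end = start + 10 * dayMs;       // exclusive upper bound

// Query shape as it would be sent to MongoDB (field name from the schema above):
const query = { starttime: { $gte: start, $lt: end } };

console.log(new Date(start).toISOString()); // 2015-10-11T00:00:00.000Z
console.log(new Date(end).toISOString());   // 2015-10-21T00:00:00.000Z
```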
Do I want to use a compound index at all, and can I otherwise optimize MongoDB's search strategy (e.g. I want to minimize the number of documents scanned)?
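For measuring how many documents a query scans, I'm assuming something like this would work (mongo-shell syntax; the index choice is a sketch, and startMs/endMs are placeholders for a computed time range):

```javascript
// With an equality match on providername and a range on starttime,
// the usual advice is equality field first, range field second:
db.appointments.createIndex({ providername: 1, starttime: 1 });

// explain("executionStats") reports totalDocsExamined vs nReturned,
// i.e. how many documents the query actually scanned:
db.appointments.find({
  providername: "some-provider",
  starttime: { $gte: startMs, $lt: endMs }
}).explain("executionStats");
```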
thanks!