I have written an API that retrieves data from a MongoDB database, and a front-end application that uses the data from that API (both applications are written in Node.js using the Koa framework, if that's relevant).
I need to aggregate a large set of numerical data: for a given period I need to calculate statistics (averages, quintiles, etc.), with the data grouped by month, by year, or by personId.
I've read examples where people say the API should be used as a thin wrapper around the database layer, presenting access to the raw data - but it makes more sense to me for this logic to live close to the database (rather than asking the front-end application to churn through the data).
Is this a common problem, and in your own experience, is it better for the API to do the aggregation, or the front-end application?
Example documents:

{ "date": ISODate("2016-07-31T07:34:05+01:00"), "value": 5, "personId": 123 },
{ "date": ISODate("2016-08-01T12:53:05+01:00"), "value": 3, "personId": 789 }
There are two perspectives you can approach this from: security and performance.
From the security angle, any data handled on the front-end is considered, for security purposes, "dirty". That means that if you accept its input at all, you have to throw out any assumption that the input is even remotely valid. With large data-sets, you need some form of validation on each of your create/update operations. While at first glance it might seem that putting things on the client side takes load off the server, unless you want exploits everywhere you're still doing some sort of iteration over the data on the server, if only to validate it.
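To make that concrete, here is a minimal sketch of the per-record validation the server ends up doing anyway (the route, the body-parser middleware, and the ctx.db database handle are assumptions for illustration, not anything from the question):

    const Koa = require('koa');
    const Router = require('@koa/router');
    const bodyParser = require('koa-bodyparser');

    const app = new Koa();
    const router = new Router();
    app.use(bodyParser());

    // Hypothetical create endpoint: every record from the "dirty" client
    // gets iterated over and checked before it touches the database.
    router.post('/readings', async (ctx) => {
      const body = ctx.request.body;
      const records = Array.isArray(body) ? body : [body];
      for (const r of records) {
        const ok = typeof r.personId === 'number'
          && typeof r.value === 'number'
          && !Number.isNaN(Date.parse(r.date));
        if (!ok) ctx.throw(400, 'invalid record');
      }
      // ctx.db is an assumed, previously attached MongoDB handle.
      await ctx.db.collection('readings').insertMany(
        records.map(r => ({ date: new Date(r.date), value: r.value, personId: r.personId }))
      );
      ctx.status = 201;
    });

    app.use(router.routes());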
From the performance angle, even if moving a large data set to the client is going to happen either way, the same volume doesn't need to come back. Keeping the operations on the server means that update-style operations are smaller, because you don't need to move the entire data-set back over the wire. To take it a step further: on the server you can at least guarantee that you have control over the performance of the operations, whereas if you offload them onto the client, you have to support every client's machine to some degree, which is a nightmare.
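As a sketch of that payload difference, continuing the hypothetical router above (the endpoint name and query parameters are made up): the aggregation runs over the full data-set inside MongoDB, and at most one small summary document per month crosses the wire back to the client.

    // Hypothetical read endpoint: the heavy lifting stays in the database,
    // and the response is a handful of summary rows, not the raw data.
    router.get('/stats/monthly', async (ctx) => {
      const from = new Date(ctx.query.from); // e.g. ?from=2016-01-01&to=2017-01-01
      const to = new Date(ctx.query.to);
      ctx.body = await ctx.db.collection('readings').aggregate([
        { $match: { date: { $gte: from, $lt: to } } },
        { $group: { _id: { year: { $year: '$date' }, month: { $month: '$date' } },
                    avgValue: { $avg: '$value' } } },
        { $sort: { '_id.year': 1, '_id.month': 1 } }
      ]).toArray();
    });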
TL;DR: security and performance both weigh heavily in favor of server-side operations on large data-sets.