[proposal] Parallelize block pruning of default datamap in driver for filter query processing.
I want to propose *"Parallelize block pruning of default datamap in driver
for filter query processing"*
We do block pruning for filter queries at the driver side.
In a real-time big data scenario, one carbon table can have millions of
carbon files.
It is currently observed that block pruning of 1 million carbon files takes
around 5 seconds, i.e. roughly 0.005 ms per carbon file (with only one
filter column set in the 'column_meta_cache' tblproperty).
With more files, block pruning takes proportionally longer.
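The arithmetic behind this estimate can be sketched as follows (a minimal illustration only; the 0.005 ms per-file cost is the figure observed above, and the class name is hypothetical):

```java
public class PruningTimeEstimate {
    public static void main(String[] args) {
        long carbonFiles = 1_000_000L;
        double msPerFile = 0.005;  // observed per-file pruning cost (assumption from the thread)
        double totalSeconds = carbonFiles * msPerFile / 1000.0;
        System.out.println(totalSeconds);  // 5.0 -- matches the ~5 s observation
    }
}
```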
Also, the Spark job will not be launched until block pruning is completed,
so the user will not know what is happening during that time or why the
Spark job has not started yet.
Currently, block pruning takes time because each segment is processed
sequentially. We can reduce the time by parallelizing it.
*solution:* Keep the default number of threads for block pruning as 4.
The user can reduce this number via the carbon property
"carbon.max.driver.threads.for.pruning", which accepts values from 1 to 4.
Group the segments as per the number of threads, distributing carbon files
equally across the groups, and launch one thread per group of segments to
handle block pruning.
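The grouping-and-launching idea above can be sketched as below. This is a hedged illustration, not the actual CarbonData implementation: segments are represented only by their carbon-file counts, and the class and method names are hypothetical. The greedy balancing and `ExecutorService` usage are one reasonable way to realize the proposal.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelBlockPruning {

    // Distribute segments (here: just their carbon-file counts) into
    // numThreads groups so each group holds a roughly equal number of files.
    static List<List<Integer>> groupSegmentsByFileCount(List<Integer> segments,
                                                       int numThreads) {
        List<List<Integer>> groups = new ArrayList<>();
        long[] load = new long[numThreads];
        for (int i = 0; i < numThreads; i++) {
            groups.add(new ArrayList<>());
        }
        // Greedy balancing: biggest segments first, each to the least-loaded group.
        List<Integer> sorted = new ArrayList<>(segments);
        sorted.sort(Collections.reverseOrder());
        for (int files : sorted) {
            int min = 0;
            for (int i = 1; i < numThreads; i++) {
                if (load[i] < load[min]) min = i;
            }
            groups.get(min).add(files);
            load[min] += files;
        }
        return groups;
    }

    public static void main(String[] args) throws Exception {
        // Default of 4 per the proposal; the user-facing cap would come from
        // carbon.max.driver.threads.for.pruning (1 to 4).
        int numThreads = 4;
        List<Integer> segments = Arrays.asList(100, 400, 250, 50, 200);
        List<List<Integer>> groups = groupSegmentsByFileCount(segments, numThreads);

        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        List<Future<Long>> results = new ArrayList<>();
        for (List<Integer> group : groups) {
            // One task per group; the loop body stands in for real block pruning.
            results.add(pool.submit(() -> {
                long visited = 0;
                for (int files : group) visited += files;
                return visited;
            }));
        }
        long total = 0;
        for (Future<Long> f : results) total += f.get();
        pool.shutdown();
        System.out.println(total);  // 1000: every carbon file visited exactly once
    }
}
```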
> Yes, I will handle this for all types of datamap pruning in the same
> flow once I am done with the default datamap's implementation and testing.
> On Fri, Nov 23, 2018 at 6:36 AM xuchuanyin <[hidden email]> wrote:
> > 'Parallelize pruning' is in my plan long time ago, nice to see your
> > proposal
> > here.
> > While implementing this, I'd like you to make it common, that is to say,
> > not only the default datamap but also other index datamaps can use this
> > parallel pruning.
> > --
> > Sent from:
> > http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/