When tuning carbon performance, I often want to check the metadata in CarbonData files without launching the Spark shell or SQL. To do that, I am writing a tool that prints the metadata of a given data folder.
Currently, I am planning the following options:
-a,--all print all information
-b,--tblProperties print table properties
-c,--column <column name> column to print statistics
-cmd <command name> command to execute, supported commands are:
-d,--detailSize print each blocklet size
-h,--help print this message
-m,--showSegment print segment information
-p,--path <path> the path which contains carbondata files,
nested folder is supported
-s,--schema print the schema
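As a rough sketch, the option handling above could look like the following minimal hand-rolled parser. This is illustrative only: the class and method names are assumptions, not actual CarbonCli code, and a real implementation would likely use a CLI library such as Apache Commons CLI.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: a minimal parser for the options listed above.
public class CarbonCliSketch {

  // Options that take a value; all others are boolean flags.
  private static boolean takesValue(String key) {
    return key.equals("c") || key.equals("column")
        || key.equals("p") || key.equals("path")
        || key.equals("cmd");
  }

  static Map<String, String> parseArgs(String[] args) {
    Map<String, String> opts = new LinkedHashMap<>();
    for (int i = 0; i < args.length; i++) {
      String key = args[i].replaceFirst("^-+", "");
      if (takesValue(key) && i + 1 < args.length) {
        opts.put(key, args[++i]);
      } else {
        opts.put(key, "");  // flag option, e.g. -s, -m, -a
      }
    }
    return opts;
  }

  public static void main(String[] args) {
    Map<String, String> opts = parseArgs(
        new String[] {"-cmd", "summary", "-p", "/data/table1", "-s", "-m"});
    System.out.println(opts);
    // {cmd=summary, p=/data/table1, s=, m=}
  }
}
```

A command dispatcher would then look up `opts.get("cmd")` and run the matching command, printing the help text when `-h` is present or the command is unknown.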
In the first phase, I think the “summary” command is the highest priority; developers can add more commands in the future.
A summary command example is shown below. One nice feature is that it prints each column's min/max range visually as a percentage bar using “———”, so users can better judge how effective the sort_columns set at table creation are.
Please suggest if you have any good idea on this tool.
SegmentID  Status             Load Start  Load End    Merged To  Format       Data Size  Index Size
0          Marked for Delete  2018-08-31  2018-08-31  NA         COLUMNAR_V3  NA         NA
1          Success            2018-08-31  2018-08-31  NA         COLUMNAR_V3  514.64MB   6.40KB
## Table Properties
Property Name Property Value
In the above example, you specify one directory and get two segments,
but it only shows one schema. I thought the number of schemas was the
same as the number of data directories. Since you mentioned that nested
folders are supported, what if the schemas in these files are not the same?
Another problem:
SegmentID  Status             Load Start  Load End    Merged To  Format       Data Size  Index Size
0          Marked for Delete  2018-08-31  2018-08-31  NA         COLUMNAR_V3  NA         NA
Why is the data size for segment#0 NA? Will it affect the total data size of
the carbon table?
For the “summary” command, I just pick the first carbondata file and read the schema from its header.
The intention is to show one schema, assuming the schemas of all data files in this folder are the same.
If there is a need to validate the schemas in all files, we can add a “validate” command. It will be easy to add to the CarbonCli tool.
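The “pick the first carbondata file” step might be sketched like this: walk the folder (nested folders included) and take the lexicographically first `*.carbondata` file. Reading the schema from the file header is omitted here, and the class and method names are illustrative, not actual CarbonCli code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;
import java.util.stream.Stream;

// Sketch only: locate the first carbondata file under a (possibly nested)
// data folder. The real tool would then read the schema from its header.
public class FirstFilePicker {
  static Optional<Path> firstCarbonFile(Path root) throws IOException {
    try (Stream<Path> paths = Files.walk(root)) {
      return paths
          .filter(Files::isRegularFile)
          .filter(p -> p.toString().endsWith(".carbondata"))
          .sorted()              // deterministic choice of the "first" file
          .findFirst();
    }
  }

  public static void main(String[] args) throws IOException {
    Path tmp = Files.createTempDirectory("carbon-demo");
    Files.createDirectories(tmp.resolve("part0"));
    Files.createFile(tmp.resolve("part0").resolve("part-0-0.carbondata"));
    System.out.println(firstCarbonFile(tmp).orElseThrow());
  }
}
```

A “validate” command could reuse the same walk, but read and compare the schema of every matching file instead of stopping at the first one.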
I have raised PR2683 for this feature.
> On 2018-09-05, at 10:23 AM, xuchuanyin <[hidden email]> wrote:
> In the above example, you specify one directory and get two segments,
> but it only shows one schema. I thought the number of schemas was the
> same as the number of data directories. Since you mentioned that nested
> folders are supported, what if the schemas in these files are not the same?
> Another problem:
> SegmentID  Status             Load Start  Load End    Merged To  Format       Data Size  Index Size
> 0          Marked for Delete  2018-08-31  2018-08-31  NA         COLUMNAR_V3  NA         NA
> Why is the data size for segment#0 NA? Will it affect the total data size of
> the carbon table?
> Sent from: http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/
> On 2018-09-05, at 3:46 PM, ravipesala <[hidden email]> wrote:
> Hi,
> I have the following doubts and suggestions for this tool.
> 1. In which module are you planning to keep this tool? Ideally, it should be
> under the tools folder, and going forward we can add more tools like this under
Sure, I will create a tool module and put it there.
> 2. Which file's schema are you printing? Are you randomly choosing the file to
> read? It would be better if we could also take a file name as input to read the
> schema from that file. It will be useful for debugging.
I am printing the schema of the first file. OK, printing the schema of a specified file can be added.
> 3. I don't get how the percentage is calculated from the min/max, or how it
> will be helpful to the user. Can you give an example?
It will print each blocklet's min/max as a “———” bar, so users can see whether the ranges overlap. If they overlap, the min/max index is less effective.
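The visualization could be sketched as follows: project each blocklet's [min, max] for a column onto the table-wide range and draw it as a fixed-width bar (here '-' stands in for the “———” above). The names and scaling are illustrative assumptions, not actual CarbonCli code.

```java
// Sketch of the min/max overlap visualization for one column.
public class MinMaxBar {
  // Render [min, max] within [gMin, gMax] as a width-character bar.
  static String bar(long min, long max, long gMin, long gMax, int width) {
    if (gMax == gMin) {
      return "-".repeat(width);  // degenerate range: fill the whole bar
    }
    int start = (int) ((min - gMin) * width / (gMax - gMin));
    int end = (int) Math.ceil((double) (max - gMin) * width / (gMax - gMin));
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < width; i++) {
      sb.append(i >= start && i < Math.max(end, start + 1) ? '-' : ' ');
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Non-overlapping bars: the sort_columns min/max index is effective.
    System.out.println("blocklet 0 |" + bar(0, 49, 0, 100, 20) + "|");
    System.out.println("blocklet 1 |" + bar(50, 100, 0, 100, 20) + "|");
    // Overlapping blocklets would print bars covering the same region,
    // meaning the min/max index prunes fewer blocklets.
  }
}
```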
> 4. It would be better to print each column's size also. It will be helpful
> for debugging.
Yes, it will print the column size when you give “-c columnName”.