java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write


java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

xm_zzc
Hi all:
  Please help. I ran a CarbonData demo program directly in Eclipse, copied from carbondata-examples-spark2/src/main/scala/org/apache/carbondata/examples/CarbonSessionExample.scala, but the following error occurred:

Exception in thread "main" java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:101)
        at org.apache.carbondata.core.datastore.impl.FileFactory.getDataOutputStream(FileFactory.java:188)
        at org.apache.carbondata.core.fileoperations.AtomicFileOperationsImpl.openForWrite(AtomicFileOperationsImpl.java:61)
        at org.apache.carbondata.core.writer.ThriftWriter.open(ThriftWriter.java:97)
        at org.apache.spark.sql.hive.CarbonMetastore.createSchemaThriftFile(CarbonMetastore.scala:412)
        at org.apache.spark.sql.hive.CarbonMetastore.createTableFromThrift(CarbonMetastore.scala:380)
        at org.apache.spark.sql.execution.command.CreateTable.run(carbonTableSchema.scala:166)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
        at cn.xm.zzc.carbontest.FirstCarbonData$.main(FirstCarbonData.scala:51)
        at cn.xm.zzc.carbontest.FirstCarbonData.main(FirstCarbonData.scala)

However, I found a file named 'schema.write' under '/data/carbon_data/default/carbon_table/Metadata/'; its size is 0.

My program:

    val warehouseLocation = Constants.SPARK_WAREHOUSE
    val storeLocation = Constants.CARBON_FILES
   
    //CarbonProperties.getInstance()
    //  .addProperty("carbon.storelocation", storeLocation)
    CarbonProperties.getInstance()
      .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
   
    import org.apache.spark.sql.CarbonSession._
    val spark = SparkSession
      .builder()
      .appName("FirstCarbonData")
      .master("local")
      .config("spark.sql.warehouse.dir", warehouseLocation)
      //.config("javax.jdo.option.ConnectionURL",
      //  s"jdbc:derby:;databaseName=${Constants.METASTORE_DB};create=true")
      //.enableHiveSupport()
      .getOrCreateCarbonSession(storeLocation, Constants.METASTORE_DB)
   
    spark.sql("DROP TABLE IF EXISTS carbon_table")

    // Create table
    spark.sql(
      s"""
         | CREATE TABLE carbon_table(
         |    shortField short,
         |    intField int,
         |    bigintField long,
         |    doubleField double,
         |    stringField string,
         |    timestampField timestamp,
         |    decimalField decimal(18,2),
         |    dateField date,
         |    charField char(5),
         |    floatField float,
         |    complexData array<string>
         | )
         | STORED BY 'carbondata'
         | TBLPROPERTIES('DICTIONARY_INCLUDE'='dateField, charField')
       """.stripMargin)

    val path = "/home/myubuntu/Works/workspace_latest/incubator-carbondata/examples/spark2/src/main/resources/data.csv"

    // scalastyle:off
    spark.sql(
      s"""
         | LOAD DATA LOCAL INPATH '$path'
         | INTO TABLE carbon_table
         | options('FILEHEADER'='shortField,intField,bigintField,doubleField,stringField,timestampField,decimalField,dateField,charField,floatField,complexData','COMPLEX_DELIMITER_LEVEL_1'='#')
       """.stripMargin)
    // scalastyle:on

    spark.sql("""
             SELECT *
             FROM carbon_table
             where stringfield = 'spark' and decimalField > 40
              """).show

    spark.sql("""
             SELECT *
             FROM carbon_table where length(stringField) = 5
              """).show

    spark.sql("""
             SELECT *
             FROM carbon_table where date_format(dateField, "yyyy-MM-dd") = "2015-07-23"
              """).show

    spark.sql("""
             select count(stringField) from carbon_table
              """.stripMargin).show

    spark.sql("""
           SELECT sum(intField), stringField
           FROM carbon_table
           GROUP BY stringField
              """).show

    spark.sql(
      """
        |select t1.*, t2.*
        |from carbon_table t1, carbon_table t2
        |where t1.stringField = t2.stringField
      """.stripMargin).show

    spark.sql(
      """
        |with t1 as (
        |select * from carbon_table
        |union all
        |select * from carbon_table
        |)
        |select t1.*, t2.*
        |from t1, carbon_table t2
        |where t1.stringField = t2.stringField
      """.stripMargin).show

    spark.sql("""
             SELECT *
             FROM carbon_table
             where stringfield = 'spark' and floatField > 2.8
              """).show

    // Drop table
    // spark.sql("DROP TABLE IF EXISTS carbon_table")
   
    spark.stop()

Re: java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

Liang Chen
Administrator
Hi

Please check whether you have write permission on the directory pointed to
by Constants.METASTORE_DB; you can use "chmod" to grant it.
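For example, a minimal sketch (the path below is a placeholder; substitute the actual value of Constants.METASTORE_DB from your project):

```shell
# Placeholder path standing in for Constants.METASTORE_DB; substitute
# the real location from your project (e.g. /data/carbon_meta).
METASTORE_DB=/tmp/carbon_meta_demo

# Create the directory if needed and grant full access recursively.
mkdir -p "$METASTORE_DB"
chmod -R 777 "$METASTORE_DB"

# Verify the resulting mode (prints 777).
stat -c '%a' "$METASTORE_DB"
```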

Regards
Liang


Re: java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

xm_zzc
Hi Liang Chen:
  Thanks for your reply.
  I have set '777' permissions on the 'Constants.CARBON_FILES' and 'Constants.METASTORE_DB' paths, but the error still occurs. I have no idea how to resolve it.

Re: java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

bhavya411
Hi,

Can you please provide the values of the following paths:

Constants.CARBON_FILES
Constants.METASTORE_DB

Also, please let me know the permissions on the /data/carbon_data directory:
does the user you run the example as have permissions on /data and
/data/carbon_data?


Regards
Bhavya


Re: java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

xm_zzc
Hi:
  val METASTORE_DB: String = "/data/carbon_meta"
  val SPARK_WAREHOUSE: String = "file:///data/spark-warehouse-carbon"  
  val CARBON_FILES: String = "file:///data/carbon_data"

  The permissions on above paths:
  drwxrwxrwx 3 myubuntu myubuntu 4096 Apr 17 17:50 /data/carbon_data/
  drwxrwxr-x 3 myubuntu myubuntu 4096 Apr 17 17:50 /data/carbon_data/default/
  drwxrwxr-x 3 myubuntu myubuntu 4096 Apr 17 17:50 /data/carbon_data/default/carbon_table/
  drwxrwxr-x 2 myubuntu myubuntu 4096 Apr 17 17:50 /data/carbon_data/default/carbon_table/Metadata/
  -rw-rw-r-- 1 myubuntu myubuntu    0 Apr 17 17:50 /data/carbon_data/default/carbon_table/Metadata/schema.write
  drwxrwxrwx 3 myubuntu myubuntu 4096 Apr 17 17:50 carbon_meta/
  drwxrwxrwx 2 myubuntu myubuntu 4096 Apr 17 17:49 spark-warehouse-carbon/

 

Re: java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

bhavya411
Please use the properties below, without the 'file://' prefix, and try again.

val METASTORE_DB: String = "/data/carbon_meta"
val SPARK_WAREHOUSE: String = "/data/spark-warehouse-carbon"
val CARBON_FILES: String = "/data/carbon_data"
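A likely reason the prefix breaks things (an inference from the stack trace above, not something confirmed in this thread) is that java.io.FileOutputStream treats its argument as a literal file path, so 'file://' becomes an ordinary path component rather than a URI scheme. A minimal shell sketch of the same failure shape:

```shell
# Run in a throwaway directory.
cd "$(mktemp -d)"

# With the scheme, "file:" is treated as a literal (non-existent)
# directory component, so the write fails -- the same shape as the
# FileNotFoundException in the stack trace.
if touch "file:/no/such/dir/x" 2>/dev/null; then
  echo "created"
else
  echo "failed: 'file:' treated as a literal path component"
fi

# A plain path with no scheme works.
touch plain.txt && echo "ok"
```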

Regards
Bhavya


Re: java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

xm_zzc
Hi:
  After removing 'file://', the 'FileNotFoundException' error disappeared when loading data into the table. But when I execute a select query, there is an error: 'java.io.IOException: Dictionary file does not exist: /data/carbon_data/default/carbon_table/Metadata/3ca9a876-31e0-44cd-8497-fd190dbcd352.dictmeta'. I cannot find this file under '/data/carbon_data/default/carbon_table/Metadata/'.

$ ll /data/carbon_data/default/carbon_table/Metadata/
total 52
drwxrwxr-x 2 myubuntu myubuntu 4096 Apr 18 15:01 ./
drwxrwxr-x 4 myubuntu myubuntu 4096 Apr 18 15:01 ../
-rw-rw-r-- 1 myubuntu myubuntu   15 Apr 18 15:01 803c1c9a-56ba-43a3-847c-ae4a83dfcbe0_36.sortindex
-rw-rw-r-- 1 myubuntu myubuntu   36 Apr 18 15:01 803c1c9a-56ba-43a3-847c-ae4a83dfcbe0.dict
-rw-rw-r-- 1 myubuntu myubuntu   11 Apr 18 15:01 803c1c9a-56ba-43a3-847c-ae4a83dfcbe0.dictmeta
-rw-rw-r-- 1 myubuntu myubuntu   27 Apr 18 15:01 ba631ef5-b01c-4556-a286-550bbc98be80_52.sortindex
-rw-rw-r-- 1 myubuntu myubuntu   52 Apr 18 15:01 ba631ef5-b01c-4556-a286-550bbc98be80.dict
-rw-rw-r-- 1 myubuntu myubuntu   11 Apr 18 15:01 ba631ef5-b01c-4556-a286-550bbc98be80.dictmeta
-rw-rw-r-- 1 myubuntu myubuntu   13 Apr 18 15:01 e42fa72f-43dc-4a58-9a5f-0e23d1653665_32.sortindex
-rw-rw-r-- 1 myubuntu myubuntu   32 Apr 18 15:01 e42fa72f-43dc-4a58-9a5f-0e23d1653665.dict
-rw-rw-r-- 1 myubuntu myubuntu   11 Apr 18 15:01 e42fa72f-43dc-4a58-9a5f-0e23d1653665.dictmeta
-rw-rw-r-- 1 myubuntu myubuntu 1397 Apr 18 15:01 schema
-rw-rw-r-- 1 myubuntu myubuntu  276 Apr 18 15:01 tablestatus

What is the problem here?

BTW, why can't I use the 'file://' prefix? And how should I configure the above paths when I use HDFS?

Thanks.

Re: java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

bhavya411
Hi,

This is only an issue because you are running the example; normally, when
you run CarbonData, you only provide the carbon store location, and it
accepts a normal HDFS path. For example, to run the Thrift server with
CarbonData I generally use the command below:

/usr/local/spark-2.1/bin/spark-submit --conf
spark.sql.hive.thriftServer.singleSession=true --class
org.apache.carbondata.spark.thriftserver.CarbonThriftServer
/usr/local/spark-2.1/carbonlib/carbondata_2.11-1.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar
hdfs://localhost:54311/user/hive/warehouse/carbon21.store

where the last argument, hdfs://localhost:54311/user/hive/warehouse/carbon21.store,
is the HDFS store path.
I am currently looking into the problem with your dictionary file and will
let you know.

Regards
Bhavya


Re: java.io.FileNotFoundException: file:/data/carbon_data/default/carbon_table/Metadata/schema.write

xm_zzc
Hi:
  This PR has resolved the problem: https://github.com/apache/incubator-carbondata/pull/813. It works fine now.