Paimon

Since Version: dev

Instructions for use

  1. When the data is stored on HDFS, you need to put core-site.xml, hdfs-site.xml and hive-site.xml into the conf directories of both FE and BE. Doris first reads the Hadoop configuration files in the conf directory, and then reads the configuration files specified by the environment variable HADOOP_CONF_DIR.
  2. The currently supported Paimon version is 0.6.0.

Create Catalog

The Paimon Catalog currently supports two types of Metastore for creating catalogs:

  • filesystem (default): stores both metadata and data in the file system.
  • hive metastore: also stores metadata in Hive Metastore, so users can access these tables directly from Hive.

Creating a Catalog Based on FileSystem

For versions 2.0.1 and earlier, please use Creating a Catalog Based on Hive Metastore below instead.

HDFS

CREATE CATALOG `paimon_hdfs` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "hdfs://HDFS8000871/user/paimon",
    "dfs.nameservices" = "HDFS8000871",
    "dfs.ha.namenodes.HDFS8000871" = "nn1,nn2",
    "dfs.namenode.rpc-address.HDFS8000871.nn1" = "172.21.0.1:4007",
    "dfs.namenode.rpc-address.HDFS8000871.nn2" = "172.21.0.2:4007",
    "dfs.client.failover.proxy.provider.HDFS8000871" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
    "hadoop.username" = "hadoop"
);
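
Once the catalog is created, you can switch to it and query Paimon tables with regular Doris SQL. A minimal sketch, assuming the catalog contains a database db1 with a table tbl1 (both hypothetical names):

-- Switch the current session to the Paimon catalog
SWITCH paimon_hdfs;

-- List the databases visible through the catalog
SHOW DATABASES;

-- Query a table (db1 and tbl1 are hypothetical names)
SELECT * FROM db1.tbl1 LIMIT 10;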

MINIO

Note:

You need to download paimon-s3-0.6.0-incubating.jar,

place it in the ${DORIS_HOME}/be/lib/java_extensions/preload-extensions directory, and restart BE.

Starting from version 2.0.2, this file can be placed in BE's custom_lib/ directory (if it does not exist, create it manually) to prevent the file from being lost when the lib directory is replaced during a cluster upgrade.

CREATE CATALOG `paimon_s3` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "s3://bucket_name/paimons3",
    "s3.endpoint" = "http://<ip>:<port>",
    "s3.access_key" = "ak",
    "s3.secret_key" = "sk"
);

OBS

Note:

You need to download paimon-s3-0.6.0-incubating.jar,

place it in the ${DORIS_HOME}/be/lib/java_extensions/preload-extensions directory, and restart BE.

Starting from version 2.0.2, this file can be placed in BE's custom_lib/ directory (if it does not exist, create it manually) to prevent the file from being lost when the lib directory is replaced during a cluster upgrade.

CREATE CATALOG `paimon_obs` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "obs://bucket_name/paimon",
    "obs.endpoint" = "obs.cn-north-4.myhuaweicloud.com",
    "obs.access_key" = "ak",
    "obs.secret_key" = "sk"
);

COS

CREATE CATALOG `paimon_cos` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "cosn://paimon-1308700295/paimoncos",
    "cos.endpoint" = "cos.ap-beijing.myqcloud.com",
    "cos.access_key" = "ak",
    "cos.secret_key" = "sk"
);

OSS

CREATE CATALOG `paimon_oss` PROPERTIES (
    "type" = "paimon",
    "warehouse" = "oss://paimon-zd/paimonoss",
    "oss.endpoint" = "oss-cn-beijing.aliyuncs.com",
    "oss.access_key" = "ak",
    "oss.secret_key" = "sk"
);

Creating a Catalog Based on Hive Metastore

CREATE CATALOG `paimon_hms` PROPERTIES (
    "type" = "paimon",
    "paimon.catalog.type" = "hms",
    "warehouse" = "hdfs://HDFS8000871/user/zhangdong/paimon2",
    "hive.metastore.uris" = "thrift://172.21.0.44:7004",
    "dfs.nameservices" = "HDFS8000871",
    "dfs.ha.namenodes.HDFS8000871" = "nn1,nn2",
    "dfs.namenode.rpc-address.HDFS8000871.nn1" = "172.21.0.1:4007",
    "dfs.namenode.rpc-address.HDFS8000871.nn2" = "172.21.0.2:4007",
    "dfs.client.failover.proxy.provider.HDFS8000871" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
    "hadoop.username" = "hadoop"
);
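
If tables are created or changed on the Paimon side after the catalog is set up, the metadata cached by Doris can be refreshed with the standard REFRESH statement. A minimal sketch:

-- Refresh the cached metadata for the whole catalog
REFRESH CATALOG paimon_hms;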

Column Type Mapping

Paimon Data Type                       | Doris Data Type           | Comment
BooleanType                            | Boolean                   |
TinyIntType                            | TinyInt                   |
SmallIntType                           | SmallInt                  |
IntType                                | Int                       |
FloatType                              | Float                     |
BigIntType                             | BigInt                    |
DoubleType                             | Double                    |
VarCharType                            | VarChar                   |
CharType                               | Char                      |
DecimalType(precision, scale)          | Decimal(precision, scale) |
TimestampType, LocalZonedTimestampType | DateTime                  |
DateType                               | Date                      |
MapType                                | Map                       | Supports Map nesting
ArrayType                              | Array                     | Supports Array nesting
VarBinaryType, BinaryType              | Binary                    |
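
To check how the columns of a specific Paimon table are mapped on the Doris side, you can describe the table through the catalog. A minimal sketch, where db1.tbl1 is a hypothetical table name:

-- Show the Doris-side schema of a Paimon table; the reported
-- column types follow the mapping in the table above
DESC paimon_hdfs.db1.tbl1;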