Package org.apache.spark.sql.jdbc
Class PostgresDialect
Object
  org.apache.spark.sql.jdbc.JdbcDialect
    org.apache.spark.sql.jdbc.PostgresDialect
- All Implemented Interfaces:
- Serializable, org.apache.spark.internal.Logging, org.apache.spark.sql.catalyst.SQLConfHelper, NoLegacyJDBCError, scala.Equals, scala.Product
public class PostgresDialect
extends JdbcDialect
implements org.apache.spark.sql.catalyst.SQLConfHelper, NoLegacyJDBCError, scala.Product, Serializable
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
Constructor Summary

Constructors:
PostgresDialect()
Method Summary

Modifier and Type | Method | Description
abstract static R | apply() |
void | beforeFetch(Connection connection, scala.collection.immutable.Map<String, String> properties) | Override connection specific properties to run before a select is made.
boolean | canHandle(String url) | Check if this dialect instance can handle a certain jdbc url.
AnalysisException | classifyException(Throwable e, String errorClass, scala.collection.immutable.Map<String, String> messageParameters, String description) | Gets a dialect exception, classifies it and wraps it by AnalysisException.
Date | convertJavaDateToDate(Date d) | Converts an instance of java.sql.Date to a custom java.sql.Date value.
Timestamp | convertJavaTimestampToTimestamp(Timestamp t) | java.sql timestamps are measured with millisecond accuracy (from Long.MinValue milliseconds to Long.MaxValue milliseconds), while Spark timestamps are measured at microsecond accuracy.
LocalDateTime | convertJavaTimestampToTimestampNTZ(Timestamp t) | Convert java.sql.Timestamp to a LocalDateTime representing the same wall-clock time as the value stored in a remote database.
Timestamp | convertTimestampNTZToJavaTimestamp(LocalDateTime ldt) | Converts a LocalDateTime representing a TimestampNTZ type to an instance of java.sql.Timestamp.
String | createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, Map<NamedReference, Map<String, String>> columnsProperties, Map<String, String> properties) | Build a create index SQL statement.
String | dropIndex(String indexName, Identifier tableIdent) | Build a drop index SQL statement.
scala.Option<DataType> | getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md) | Get the custom datatype mapping for the given jdbc meta information.
scala.Option<JdbcType> | getJDBCType(DataType dt) | Retrieve the jdbc / sql type for a given datatype.
String | getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample) |
String | getTruncateQuery(String table, scala.Option<Object> cascade) | The SQL query used to truncate a table.
String | getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable) |
String | getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType) |
boolean | indexExists(Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options) | Checks whether an index exists.
scala.Option<Object> | isCascadingTruncateTable() | Return Some[true] iff TRUNCATE TABLE causes cascading by default.
boolean | isSupportedFunction(String funcName) | Returns whether the database supports the function.
String | renameTable(Identifier oldTable, Identifier newTable) | Rename an existing table.
boolean | supportsLimit() | Returns true if the dialect supports the LIMIT clause.
boolean | supportsOffset() | Returns true if the dialect supports the OFFSET clause.
boolean | supportsTableSample() |
static String | toString() |
void | updateExtraColumnMeta(Connection conn, ResultSetMetaData rsmd, int columnIdx, MetadataBuilder metadata) | Get extra column metadata for the given column.

Methods inherited from class org.apache.spark.sql.jdbc.JdbcDialect:
alterTable, classifyException, compileAggregate, compileExpression, compileValue, createConnectionFactory, createSchema, createTable, dropSchema, dropTable, functions, getAddColumnQuery, getDayTimeIntervalAsMicros, getDeleteColumnQuery, getFullyQualifiedQuotedTableName, getJdbcSQLQueryBuilder, getLimitClause, getOffsetClause, getRenameColumnQuery, getSchemaCommentQuery, getSchemaQuery, getTableCommentQuery, getTableExistsQuery, getTruncateQuery, getYearMonthIntervalAsMonths, insertIntoTable, listIndexes, listSchemas, quoteIdentifier, removeSchemaCommentQuery, renameTable, schemasExists

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface scala.Equals:
canEqual, equals

Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface scala.Product:
productArity, productElement, productElementName, productElementNames, productIterator, productPrefix

Methods inherited from interface org.apache.spark.sql.catalyst.SQLConfHelper:
conf, withSQLConf
Constructor Details

PostgresDialect
public PostgresDialect()
 
Method Details

apply
public abstract static R apply()
- 
toString
public static String toString()
- 
canHandle
public boolean canHandle(String url)
Description copied from class: JdbcDialect
Check if this dialect instance can handle a certain jdbc url.
- Specified by:
- canHandle in class JdbcDialect
- Parameters:
- url - the jdbc url.
- Returns:
- True if the dialect can be applied on the given jdbc url.
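For example, the dialect registry resolves a PostgreSQL JDBC URL to this dialect via canHandle; a minimal sketch (host, port, and database name are placeholders):

    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

    // JdbcDialects.get probes each registered dialect's canHandle and returns
    // the first match; for a jdbc:postgresql URL that is PostgresDialect.
    val url = "jdbc:postgresql://localhost:5432/mydb"
    val dialect: JdbcDialect = JdbcDialects.get(url)
    assert(dialect.canHandle(url))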
 
- 
isSupportedFunction
public boolean isSupportedFunction(String funcName)
Description copied from class: JdbcDialect
Returns whether the database supports the function.
- Overrides:
- isSupportedFunction in class JdbcDialect
- Parameters:
- funcName - Upper-cased function name
- Returns:
- True if the database supports the function.
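A hedged sketch of probing function pushdown support; the set of functions the Postgres dialect accepts is version-specific, so the result here is illustrative:

    import org.apache.spark.sql.jdbc.JdbcDialects

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    // Function names are passed upper-cased, per the parameter contract above.
    val canPushAbs = dialect.isSupportedFunction("ABS") // likely true for Postgres; verify for your Spark version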
 
- 
getCatalystType
public scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)
Description copied from class: JdbcDialect
Get the custom datatype mapping for the given jdbc meta information.
Guidelines for mapping database defined timestamps to Spark SQL timestamps:
- TIMESTAMP WITHOUT TIME ZONE if preferTimestampNTZ -> TimestampNTZType
- TIMESTAMP WITHOUT TIME ZONE if !preferTimestampNTZ -> TimestampType (LTZ)
- TIMESTAMP WITH TIME ZONE -> TimestampType (LTZ)
- TIMESTAMP WITH LOCAL TIME ZONE -> TimestampType (LTZ)
- If the TIMESTAMP cannot be distinguished by sqlType and typeName, preferTimestampNTZ is respected for now, but we may need to add another option in the future if necessary.
- Overrides:
- getCatalystType in class JdbcDialect
- Parameters:
- sqlType - Refers to Types constants, or other constants defined by the target database, e.g. -101 is Oracle's TIMESTAMP WITH TIME ZONE type. This value is returned by ResultSetMetaData.getColumnType(int).
- typeName - The column type name used by the database (e.g. "BIGINT UNSIGNED"). This is sometimes used to determine the target data type when sqlType is not sufficient, if multiple database types are conflated into a single id. This value is returned by ResultSetMetaData.getColumnTypeName(int).
- size - The size of the type, e.g. the maximum precision for numeric types, length for character strings, etc. This value is returned by ResultSetMetaData.getPrecision(int).
- md - Result metadata associated with this type. This contains additional information from ResultSetMetaData or user specified options:
  - isTimestampNTZ: Whether to read a TIMESTAMP WITHOUT TIME ZONE value as TimestampNTZType or not. This is configured by JDBCOptions.preferTimestampNTZ.
  - scale: The length of the fractional part, as returned by ResultSetMetaData.getScale(int).
- Returns:
- An Option of the actual DataType (a subclass of DataType), or None if the default type mapping should be used.
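As an illustration, the mapping can be probed directly; the expected output is an assumption based on Postgres's 4-byte float4 type:

    import java.sql.Types
    import org.apache.spark.sql.jdbc.JdbcDialects
    import org.apache.spark.sql.types.MetadataBuilder

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    // Postgres "float4" (REAL) is 4 bytes wide, so the dialect is expected to
    // map it to FloatType; None would mean "use the default JDBC type mapping".
    val mapped = dialect.getCatalystType(Types.REAL, "float4", 8, new MetadataBuilder())
    println(mapped) // e.g. Some(FloatType)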
 
- 
convertJavaTimestampToTimestampNTZ
public LocalDateTime convertJavaTimestampToTimestampNTZ(Timestamp t)
Description copied from class: JdbcDialect
Convert java.sql.Timestamp to a LocalDateTime representing the same wall-clock time as the value stored in a remote database. JDBC dialects should override this function to provide implementations that suit their JDBC drivers.
- Overrides:
- convertJavaTimestampToTimestampNTZ in class JdbcDialect
- Parameters:
- t - Timestamp returned from the JDBC driver's getTimestamp method.
- Returns:
- A LocalDateTime representing the same wall-clock time as the timestamp in the database.
 
- 
convertTimestampNTZToJavaTimestamp
public Timestamp convertTimestampNTZToJavaTimestamp(LocalDateTime ldt)
Description copied from class: JdbcDialect
Converts a LocalDateTime representing a TimestampNTZ type to an instance of java.sql.Timestamp.
- Overrides:
- convertTimestampNTZToJavaTimestamp in class JdbcDialect
- Parameters:
- ldt - representing a TimestampNTZType.
- Returns:
- A Java Timestamp representing this LocalDateTime.
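A small round-trip sketch of these two conversions, assuming they are inverses for ordinary (non-"infinity") values:

    import java.sql.Timestamp
    import java.time.LocalDateTime
    import org.apache.spark.sql.jdbc.JdbcDialects

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    // A wall-clock reading with no time zone attached.
    val ldt: LocalDateTime = LocalDateTime.of(2024, 1, 15, 12, 30, 0)
    val ts: Timestamp = dialect.convertTimestampNTZToJavaTimestamp(ldt)
    // Converting back should preserve the wall-clock reading.
    assert(dialect.convertJavaTimestampToTimestampNTZ(ts) == ldt)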
 
- 
getJDBCType
public scala.Option<JdbcType> getJDBCType(DataType dt)
Description copied from class: JdbcDialect
Retrieve the jdbc / sql type for a given datatype.
- Overrides:
- getJDBCType in class JdbcDialect
- Parameters:
- dt - The datatype (e.g. StringType)
- Returns:
- The new JdbcType if there is an override for this DataType
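For instance (a sketch; the concrete Postgres mapping of StringType to TEXT is an assumption to verify against your Spark version):

    import org.apache.spark.sql.jdbc.JdbcDialects
    import org.apache.spark.sql.types.StringType

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    // Expected to yield the Postgres-specific override, e.g. Some(JdbcType("TEXT", ...));
    // None would mean the generic JDBC mapping applies.
    println(dialect.getJDBCType(StringType))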
 
- 
isCascadingTruncateTable
public scala.Option<Object> isCascadingTruncateTable()
Description copied from class: JdbcDialect
Return Some[true] iff TRUNCATE TABLE causes cascading by default.
Some[true]: TRUNCATE TABLE causes cascading.
Some[false]: TRUNCATE TABLE does not cause cascading.
None: The behavior of TRUNCATE TABLE is unknown (default).
- Overrides:
- isCascadingTruncateTable in class JdbcDialect
- Returns:
- (undocumented)
 
- 
getTruncateQuery
public String getTruncateQuery(String table, scala.Option<Object> cascade)
The SQL query used to truncate a table. For Postgres, the default behaviour is to also truncate any descendant tables. As this is a (possibly unwanted) side-effect, the Postgres dialect adds 'ONLY' to truncate only the table in question.
- Overrides:
- getTruncateQuery in class JdbcDialect
- Parameters:
- table - The table to truncate
- cascade - Whether or not to cascade the truncation. Default value is the value of isCascadingTruncateTable(). Cascading a truncation will truncate tables with a foreign key relationship to the target table. However, it will not truncate tables with an inheritance relationship to the target table, as the truncate query always includes "ONLY" to prevent this behaviour.
- Returns:
- The SQL query to use for truncating a table
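A sketch of the generated SQL; the exact strings follow from the 'ONLY' behaviour above but should be treated as illustrative:

    import org.apache.spark.sql.jdbc.JdbcDialects

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    // ONLY keeps descendant (inheriting) tables untouched.
    println(dialect.getTruncateQuery("public.events"))
    // e.g. TRUNCATE TABLE ONLY public.events
    println(dialect.getTruncateQuery("public.events", Some(true)))
    // e.g. TRUNCATE TABLE ONLY public.events CASCADE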
 
- 
beforeFetch
public void beforeFetch(Connection connection, scala.collection.immutable.Map<String, String> properties)
Description copied from class: JdbcDialect
Override connection specific properties to run before a select is made. This is in place to allow dialects that need special treatment to optimize behavior.
- Overrides:
- beforeFetch in class JdbcDialect
- Parameters:
- connection - The connection object
- properties - The connection properties. This is passed through from the relation.
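This hook matters for Postgres because the JDBC driver only honours a fetch size when auto-commit is disabled; a hedged sketch of invoking it directly (connection details are placeholders):

    import java.sql.DriverManager
    import org.apache.spark.sql.jdbc.JdbcDialects

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    val conn = DriverManager.getConnection("jdbc:postgresql://host/db", "user", "pass")
    // With a non-zero fetchsize the dialect is expected to disable auto-commit,
    // letting the Postgres driver stream rows instead of buffering the full result.
    dialect.beforeFetch(conn, Map("fetchsize" -> "1000"))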
 
- 
getUpdateColumnTypeQuery
public String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)
- Overrides:
- getUpdateColumnTypeQuery in class JdbcDialect
 
- 
getUpdateColumnNullabilityQuery
public String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)
- Overrides:
- getUpdateColumnNullabilityQuery in class JdbcDialect
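Both methods return plain DDL strings; a sketch of what the Postgres dialect is expected to generate (the exact forms are assumptions):

    import org.apache.spark.sql.jdbc.JdbcDialects

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    // Postgres uses ALTER COLUMN ... TYPE for type changes.
    println(dialect.getUpdateColumnTypeQuery("events", "payload", "TEXT"))
    // e.g. ALTER TABLE events ALTER COLUMN "payload" TYPE TEXT
    println(dialect.getUpdateColumnNullabilityQuery("events", "payload", true))
    // e.g. ALTER TABLE events ALTER COLUMN "payload" DROP NOT NULL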
 
- 
createIndex
public String createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, Map<NamedReference, Map<String, String>> columnsProperties, Map<String, String> properties)
Description copied from class: JdbcDialect
Build a create index SQL statement.
- Overrides:
- createIndex in class JdbcDialect
- Parameters:
- indexName - the name of the index to be created
- tableIdent - the table on which the index is to be created
- columns - the columns on which the index is to be created
- columnsProperties - the properties of the columns on which the index is to be created
- properties - the properties of the index to be created
- Returns:
- the SQL statement to use for creating the index (a combined usage sketch follows the dropIndex entry below).
 
- 
indexExists
public boolean indexExists(Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Description copied from class: JdbcDialect
Checks whether an index exists.
- Overrides:
- indexExists in class JdbcDialect
- Parameters:
- conn - (undocumented)
- indexName - the name of the index
- tableIdent - the table on which the index is to be checked
- options - JDBCOptions of the table
- Returns:
- true if the index with indexName exists in the table with tableName, false otherwise
 
- 
dropIndex
public String dropIndex(String indexName, Identifier tableIdent)
Description copied from class: JdbcDialect
Build a drop index SQL statement.
- Overrides:
- dropIndex in class JdbcDialect
- Parameters:
- indexName - the name of the index to be dropped.
- tableIdent - the table on which the index is to be dropped.
- Returns:
- the SQL statement to use for dropping the index.
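A combined sketch of generating (not executing) index DDL with createIndex and dropIndex; the emitted SQL shown in comments is an assumption:

    import org.apache.spark.sql.connector.catalog.Identifier
    import org.apache.spark.sql.connector.expressions.{Expressions, NamedReference}
    import org.apache.spark.sql.jdbc.JdbcDialects

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    val table = Identifier.of(Array("public"), "events")
    val columns: Array[NamedReference] = Array(Expressions.column("event_time"))

    // Empty property maps: no per-column or per-index options.
    val createSql = dialect.createIndex(
      "events_time_idx", table, columns,
      new java.util.HashMap[NamedReference, java.util.Map[String, String]](),
      new java.util.HashMap[String, String]())
    val dropSql = dialect.dropIndex("events_time_idx", table)
    println(createSql) // e.g. CREATE INDEX "events_time_idx" ON "public"."events" (event_time)
    println(dropSql)   // e.g. DROP INDEX "events_time_idx"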
 
- 
classifyException
public AnalysisException classifyException(Throwable e, String errorClass, scala.collection.immutable.Map<String, String> messageParameters, String description)
Description copied from class: JdbcDialect
Gets a dialect exception, classifies it and wraps it by AnalysisException.
- Specified by:
- classifyException in interface NoLegacyJDBCError
- Overrides:
- classifyException in class JdbcDialect
- Parameters:
- e - The dialect specific exception.
- errorClass - The error class assigned in the case of an unclassified e
- messageParameters - The message parameters of errorClass
- description - The error description
- Returns:
- AnalysisExceptionor its sub-class.
 
- 
supportsLimit
public boolean supportsLimit()
Description copied from class: JdbcDialect
Returns true if the dialect supports the LIMIT clause.
Note: Some built-in dialects support the LIMIT clause with a trick; see OracleDialect.OracleSQLQueryBuilder and MsSqlServerDialect.MsSqlServerSQLQueryBuilder.
- Overrides:
- supportsLimit in class JdbcDialect
- Returns:
- (undocumented)
 
- 
supportsOffset
public boolean supportsOffset()
Description copied from class: JdbcDialect
Returns true if the dialect supports the OFFSET clause.
Note: Some built-in dialects support the OFFSET clause with a trick; see OracleDialect.OracleSQLQueryBuilder and MySQLDialect.MySQLSQLQueryBuilder.
- Overrides:
- supportsOffset in class JdbcDialect
- Returns:
- (undocumented)
 
- 
supportsTableSample
public boolean supportsTableSample()
- Overrides:
- supportsTableSample in class JdbcDialect
 
- 
getTableSample
public String getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample)
- Overrides:
- getTableSample in class JdbcDialect
 
- 
renameTable
public String renameTable(Identifier oldTable, Identifier newTable)
Description copied from class: JdbcDialect
Rename an existing table.
- Overrides:
- renameTable in class JdbcDialect
- Parameters:
- oldTable - The existing table.
- newTable - New name of the table.
- Returns:
- The SQL statement to use for renaming the table.
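A sketch of the generated statement (the quoting and the unqualified new name are assumptions about the Postgres dialect's output):

    import org.apache.spark.sql.connector.catalog.Identifier
    import org.apache.spark.sql.jdbc.JdbcDialects

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    val sql = dialect.renameTable(
      Identifier.of(Array("public"), "events_old"),
      Identifier.of(Array("public"), "events"))
    println(sql) // e.g. ALTER TABLE "public"."events_old" RENAME TO "events"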
 
- 
convertJavaTimestampToTimestamp
public Timestamp convertJavaTimestampToTimestamp(Timestamp t)
java.sql timestamps are measured with millisecond accuracy (from Long.MinValue milliseconds to Long.MaxValue milliseconds), while Spark timestamps are measured at microsecond accuracy. For the "infinity values" in PostgreSQL (represented by big constants), we need to clamp them to avoid overflow. If it is not one of the infinity values, fall back to default behavior.
- Overrides:
- convertJavaTimestampToTimestamp in class JdbcDialect
- Parameters:
- t - (undocumented)
- Returns:
- (undocumented)
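A sketch of the contract: ordinary values pass through, while the Postgres driver's 'infinity'/'-infinity' sentinels (internal constants not shown here) are clamped to values representable at Spark's microsecond precision:

    import java.sql.Timestamp
    import org.apache.spark.sql.jdbc.JdbcDialects

    val dialect = JdbcDialects.get("jdbc:postgresql://host/db")
    // A normal timestamp is expected to fall through to the default (identity) behavior.
    val t = Timestamp.valueOf("2024-01-15 12:30:00")
    assert(dialect.convertJavaTimestampToTimestamp(t) == t)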
 
- 
convertJavaDateToDate
public Date convertJavaDateToDate(Date d)
Description copied from class: JdbcDialect
Converts an instance of java.sql.Date to a custom java.sql.Date value.
- Overrides:
- convertJavaDateToDate in class JdbcDialect
- Parameters:
- d - the date value returned from JDBC ResultSet getDate method.
- Returns:
- the date value after conversion
 
- 
updateExtraColumnMeta
public void updateExtraColumnMeta(Connection conn, ResultSetMetaData rsmd, int columnIdx, MetadataBuilder metadata)
Description copied from class: JdbcDialect
Get extra column metadata for the given column.
- Overrides:
- updateExtraColumnMeta in class JdbcDialect
- Parameters:
- conn - The connection currently being used.
- rsmd - The metadata of the current result set.
- columnIdx - The index of the column.
- metadata - The metadata builder to store the extra column information.
 
 