Class FeatureHasher
All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, Params, HasInputCols, HasNumFeatures, HasOutputCol, DefaultParamsWritable, Identifiable, MLWritable
 The FeatureHasher transformer operates on multiple columns. Each column may contain either
 numeric or categorical features. Behavior and handling of column data types is as follows:
  - Numeric columns: For numeric features, the hash value of the column name is used to map the
                     feature value to its index in the feature vector. By default, numeric features
                     are not treated as categorical (even when they are integers). To treat them
                     as categorical, specify the relevant columns in categoricalCols.
  - String columns: For categorical features, the hash value of the string "column_name=value"
                    is used to map to the vector index, with an indicator value of 1.0.
                    Thus, categorical features are "one-hot" encoded
                    (similarly to using OneHotEncoder with dropLast=false).
  - Boolean columns: Boolean values are treated in the same way as string columns. That is,
                     boolean features are represented as "column_name=true" or "column_name=false",
                     with an indicator value of 1.0.
 
Null (missing) values are ignored (implicitly zero in the resulting feature vector).
 The hash function used here is also the MurmurHash 3 used in HashingTF. Since a simple modulo
 on the hashed value is used to determine the vector index, it is advisable to use a power of two
 as the numFeatures parameter; otherwise the features will not be mapped evenly to the vector
 indices.
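The hash-and-modulo mapping described above can be sketched in plain Scala. This is illustrative only: Spark's FeatureHasher hashes the UTF-8 bytes of the string with a fixed seed, so the indices computed here will not match FeatureHasher's actual output, but the shape of the mapping is the same.

```scala
import scala.util.hashing.MurmurHash3

// Default numFeatures; a power of two, as the note above advises.
val numFeatures = 1 << 18 // 262144

// Spark maps the (possibly negative) Int hash to a vector index with a
// non-negative modulo (cf. Utils.nonNegativeMod in Spark).
def nonNegativeMod(x: Int, mod: Int): Int = {
  val raw = x % mod
  raw + (if (raw < 0) mod else 0)
}

// Numeric column: the column name is hashed; the feature value is kept as-is.
val realIdx = nonNegativeMod(MurmurHash3.stringHash("real"), numFeatures)

// Categorical (string/boolean) column: "column_name=value" is hashed, and the
// stored indicator value is 1.0.
val fooIdx = nonNegativeMod(MurmurHash3.stringHash("string=foo"), numFeatures)
```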
 
   val df = Seq(
    (2.0, true, "1", "foo"),
    (3.0, false, "2", "bar")
   ).toDF("real", "bool", "stringNum", "string")
   val hasher = new FeatureHasher()
    .setInputCols("real", "bool", "stringNum", "string")
    .setOutputCol("features")
   hasher.transform(df).show(false)
   +----+-----+---------+------+------------------------------------------------------+
   |real|bool |stringNum|string|features                                              |
   +----+-----+---------+------+------------------------------------------------------+
   |2.0 |true |1        |foo   |(262144,[51871,63643,174475,253195],[1.0,1.0,2.0,1.0])|
   |3.0 |false|2        |bar   |(262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.0]) |
   +----+-----+---------+------+------------------------------------------------------+
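Building on the example above, a numeric column can be forced to be categorical via categoricalCols. This sketch assumes the same df as above; the resulting indices are omitted since they depend on the hashed strings.

```scala
// Hash "real" as a categorical feature: rows contribute "real=2.0" /
// "real=3.0" with an indicator value of 1.0, instead of hashing only the
// column name and keeping the raw numeric value.
val categoricalHasher = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setCategoricalCols(Array("real"))
  .setOutputCol("features")
categoricalHasher.transform(df).show(false)
```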
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter

Constructor Summary

Constructors:
FeatureHasher()
Method Summary

categoricalCols() - Numeric columns to treat as categorical features.
copy(ParamMap extra) - Creates a copy of this instance with the same UID and some extra params.
getCategoricalCols() : String[]
inputCols() : final StringArrayParam - Param for input column names.
load(String path) : static FeatureHasher
numFeatures() : final IntParam - Param for Number of features.
outputCol() - Param for output column name.
read() : static MLReader<T>
setCategoricalCols(String[] value)
setInputCols(String[] value)
setInputCols(scala.collection.immutable.Seq<String> values)
setNumFeatures(int value)
setOutputCol(String value)
toString()
transform(Dataset<?> dataset) - Transforms the input dataset.
transformSchema(StructType schema) - Check transform validity and derive the output schema from the input schema.
uid() - An immutable unique ID for the object and its derivatives.

Methods inherited from class org.apache.spark.ml.Transformer:
transform, transform, transform

Methods inherited from class org.apache.spark.ml.PipelineStage:
params

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface org.apache.spark.ml.util.DefaultParamsWritable:
write

Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCols:
getInputCols

Methods inherited from interface org.apache.spark.ml.param.shared.HasNumFeatures:
getNumFeatures

Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol:
getOutputCol

Methods inherited from interface org.apache.spark.internal.Logging:
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface org.apache.spark.ml.util.MLWritable:
save

Methods inherited from interface org.apache.spark.ml.param.Params:
clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, onParamChange, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Constructor Details

FeatureHasher
public FeatureHasher()
Method Details

load

read
numFeatures
Description copied from interface: HasNumFeatures
Param for Number of features. Should be greater than 0.
Specified by:
numFeatures in interface HasNumFeatures
Returns:
(undocumented)

outputCol
Description copied from interface: HasOutputCol
Param for output column name.
Specified by:
outputCol in interface HasOutputCol
Returns:
(undocumented)

inputCols
Description copied from interface: HasInputCols
Param for input column names.
Specified by:
inputCols in interface HasInputCols
Returns:
(undocumented)

uid
Description copied from interface: Identifiable
An immutable unique ID for the object and its derivatives.
Specified by:
uid in interface Identifiable
Returns:
(undocumented)

categoricalCols
Numeric columns to treat as categorical features. By default only string and boolean columns are treated as categorical, so this param can be used to explicitly specify the numeric columns to treat as categorical. Note that the relevant columns must also be set in inputCols; categorical columns not set in inputCols will be listed in a warning.
Returns:
(undocumented)
 
setNumFeatures

setInputCols

setInputCols

setOutputCol

getCategoricalCols

setCategoricalCols
transform
Description copied from class: Transformer
Transforms the input dataset.
Specified by:
transform in class Transformer
Parameters:
dataset - (undocumented)
Returns:
(undocumented)

copy
Description copied from interface: Params
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
Specified by:
copy in interface Params
Specified by:
copy in class Transformer
Parameters:
extra - (undocumented)
Returns:
(undocumented)
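As a usage sketch (the parameter override chosen here is an assumption for illustration): copy derives a tweaked instance without mutating the original.

```scala
import org.apache.spark.ml.param.ParamMap

val hasher = new FeatureHasher()
  .setInputCols("real", "bool")
  .setOutputCol("features")

// Same UID, but with numFeatures overridden; `hasher` itself is unchanged.
val smaller = hasher.copy(ParamMap(hasher.numFeatures -> 1024))
```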
 
transformSchema
Description copied from class: PipelineStage
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate(). Typical implementations should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
Specified by:
transformSchema in class PipelineStage
Parameters:
schema - (undocumented)
Returns:
(undocumented)
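For illustration (column names assumed), transformSchema can be called directly to validate parameters and inspect the derived output schema without running a job, which is useful when wiring stages into a Pipeline:

```scala
import org.apache.spark.sql.types.{BooleanType, DoubleType, StructField, StructType}

val schema = StructType(Seq(
  StructField("real", DoubleType),
  StructField("bool", BooleanType)))

val hasher = new FeatureHasher()
  .setInputCols("real", "bool")
  .setOutputCol("features")

// Throws if parameters are invalid; otherwise returns the input schema
// plus a "features" vector column.
val outSchema = hasher.transformSchema(schema)
```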
 
toString
Specified by:
toString in interface Identifiable
Overrides:
toString in class Object
 
 