| PLearn::_plearn_nan_type | |
| PLearn::AbsVariable | |
| PLearn::AdaBoost | |
| PLearn::AdaptGradientOptimizer | |
| PLearn::AddCostToLearner | |
| PLearn::AdditiveNormalizationKernel | |
| PLearn::AffineTransformVariable | Affine transformation of a vector variable |
| PLearn::AffineTransformWeightPenalty | Weight decay terms for affine transforms |
| PLearn::ArgmaxVariable | |
| PLearn::ArgminOfVariable | |
| PLearn::ArgminVariable | |
| PLearn::Array< T > | |
| PLearn::Array2ArrayMap< T > | |
| PLearn::ArrayAllocator< T, SizeBits > | |
| PLearn::ArrayAllocatorIndex< IndexBase, SizeBits > | This type represents an index into the allocated memory, as a bit-field parameterized by the template argument SizeBits |
| PLearn::ArrayAllocatorOptions | |
| PLearn::ArrayAllocatorTrivial< T, SizeBits > | This allocator solely performs allocation |
| PLearn::AsciiVMatrix | |
| PLearn::AutoRunCommand | |
| PLearn::AutoSDBVMatrix | A VMatrix view of a SimpleDB: columns whose type is string are removed from the view, and all others are converted to real (characters to their ASCII code, and dates to the float date format, e.g. 990324) |
| PLearn::AutoVMatrix | This class is a simple wrapper to an underlying VMatrix of another type. All it does is forward the method calls |
| PLearn::BatchVMatrix | VMat class that replicates small parts of a matrix (mini-batches), so that each mini-batch appears twice (consecutively) |
| PLearn::BinaryClassificationLossVariable | For one-dimensional output: class is 0 if output < 0.5, and 1 if >= 0.5 |
| PLearn::BinarySampleVariable | |
| PLearn::BinaryVariable | |
| PLearn::Binner | |
| PLearn::BootstrapSplitter | |
| PLearn::BootstrapVMatrix | |
| PLearn::BottomNI< T > | |
| PLearn::ByteMemoryVMatrix | |
| PLearn::CallbackMeasurer | |
| PLearn::CenteredVMatrix | |
| PLearn::ClassDistanceProportionCostFunction | |
| PLearn::ClassErrorCostFunction | |
| PLearn::ClassificationLossVariable | Indicator(classnum==argmax(netout)) |
| PLearn::ClassifierFromDensity | |
| PLearn::ClassMarginCostFunction | |
| PLearn::ColumnIndexVariable | |
| PLearn::ColumnSumVariable | Result is a single row that contains the sum of each column of the input |
| PLearn::CompactVMatrix | |
| PLearn::CompactVMatrixGaussianKernel | |
| PLearn::CompactVMatrixPolynomialKernel | |
| PLearn::ComplementedProbSparseMatrix | |
| PLearn::CompressedVMatrix | |
| PLearn::ConcatColumnsRandomVariable | Concatenate the columns of the matrix arguments, just like the hconcat function (PLearn.h) on Vars |
| PLearn::ConcatColumnsVariable | Concatenation of the columns of several variables |
| PLearn::ConcatColumnsVMatrix | |
| PLearn::ConcatOfVariable | |
| PLearn::ConcatRowsSubVMatrix | |
| PLearn::ConcatRowsVariable | Concatenation of the rows of several variables |
| PLearn::ConcatRowsVMatrix | |
| PLearn::ConditionalCDFSmoother | |
| PLearn::ConditionalDensityNet | |
| PLearn::ConditionalDistribution | |
| PLearn::ConditionalExpression | |
| PLearn::ConditionalGaussianDistribution | |
| PLearn::ConditionalStatsCollector | |
| PLearn::ConjGradientOptimizer | |
| PLearn::ConstantRegressor | |
| PLearn::ConvexBasisKernel | Returns prod_i log(1+exp(c*(x1[i]-x2[i]))). Note that this kernel is not symmetric |
| PLearn::ConvolveVariable | A convolve var; equals convolve(input, mask) |
| PLearn::CountEventsSemaphore | |
| PLearn::CrossEntropyVariable | Cost = - sum_i {target_i * log(output_i) + (1-target_i) * log(1-output_i)} |
| PLearn::CrossReferenceVMatrix | |
| PLearn::CumVMatrix | |
| PLearn::CutAboveThresholdVariable | |
| PLearn::CutBelowThresholdVariable | |
| PLearn::DatedJoinVMatrix | |
| PLearn::DatedVMatrix | |
| PLearn::DBSplitter | |
| PLearn::DeterminantVariable | The argument must be a square matrix Var and the result is its determinant |
| PLearn::DiagonalizedFactorsProductVariable | |
| PLearn::DiagonalNormalRandomVariable | |
| PLearn::DiagonalNormalSampleVariable | |
| PLearn::Dictionary | |
| PLearn::DifferenceKernel | Returns sum_i[x1_i-x2_i] |
| PLearn::DilogarithmVariable | Computes the dilogarithm, the primitive of the softplus function log(1+exp(x)) |
| PLearn::DirectNegativeCostFunction | |
| PLearn::DiskVMatrix | A VMatrix whose (compressed) data resides in a directory and can span several files |
| PLearn::DistanceKernel | This class implements an Ln distance (defaults to L2, i.e. Euclidean distance) |
| PLearn::Distribution | |
| PLearn::DivisiveNormalizationKernel | |
| PLearn::DivVariable | Divides 2 matrix vars of same size elementwise |
| PLearn::DotProductKernel | Returns <x1,x2> |
| PLearn::DotProductVariable | Dot product between 2 vectors (or possibly 2 matrices, which are then simply seen as vectors) |
| PLearn::DoubleAccessSparseMatrix< T > | |
| PLearn::DuplicateColumnVariable | |
| PLearn::DuplicateRowVariable | |
| PLearn::DuplicateScalarVariable | |
| PLearn::ElementAtPositionVariable | |
| PLearn::ElementWiseDivisionRandomVariable | |
| PLearn::EmbeddedLearner | |
| PLearn::EmbeddedSequentialLearner | |
| PLearn::EmpiricalDistribution | |
| PLearn::EntropyContrast | |
| PLearn::EqualConstantVariable | A scalar var; equals 1 if input1==c, 0 otherwise |
| PLearn::EqualScalarVariable | A scalar var; equals 1 if input1==input2, 0 otherwise |
| PLearn::EqualVariable | A scalar var; equals 1 if input1==input2, 0 otherwise |
| PLearn::ErfVariable | |
| PLearn::Experiment | |
| PLearn::ExplicitSplitter | |
| PLearn::ExpMeanStatsIterator | |
| PLearn::ExpRandomVariable | |
| PLearn::ExpVariable | |
| PLearn::ExtendedRandomVariable | |
| PLearn::ExtendedVariable | |
| PLearn::ExtendedVMatrix | |
| PLearn::Field | |
| PLearn::FieldConvertCommand | |
| PLearn::FieldPtr | |
| PLearn::FieldRowRef | |
| PLearn::FieldStat | |
| PLearn::FieldValue | |
| PLearn::FieldValue::DateVal_t | |
| PLearn::FilePStreamBuf | |
| PLearn::FilesIntStream | |
| PLearn::FileVMatrix | A VMatrix that exists in a .pmat file (native plearn matrix format, same as for Mat) |
| PLearn::FilteredVMatrix | |
| PLearn::FilterSplitter | |
| PLearn::FinancePreprocVMatrix | |
| PLearn::ForwardVMatrix | This class is a simple wrapper to an underlying VMatrix of another type. All it does is forward the method calls |
| PLearn::FractionSplitter | |
| PLearn::Func | |
| PLearn::Function | |
| PLearn::FunctionalRandomVariable | |
| PLearn::GaussianContinuum | |
| PLearn::GaussianDensityKernel | |
| PLearn::GaussianDistribution | |
| PLearn::GaussianKernel | Returns exp(-norm_2(x1-x2)^2/sigma^2) |
| PLearn::GaussianProcessRegressor | |
| PLearn::GaussMix | |
| PLearn::GeneralizedDistanceRBFKernel | Returns exp(-phi*(sum_i[abs(x1_i^a - x2_i^a)^b])^c) |
| PLearn::GeneralizedOneHotVMatrix | This VMat is a generalization of OneHotVMatrix where all columns (given by the Vec index) are mapped, instead of just the last one |
| PLearn::GenerateDecisionPlot | |
| PLearn::GeodesicDistanceKernel | |
| PLearn::GetInputVMatrix | |
| PLearn::GhostScript | |
| PLearn::Gnuplot | |
| PLearn::GradientOptimizer | |
| PLearn::GramVMatrix | |
| PLearn::Grapher | |
| PLearn::GraphicalBiText | |
| PLearn::HardSlopeVariable | |
| PLearn::Hash< KeyType, DataType > | |
| PLearn::HashKeyDataPair< KeyType, DataType > | |
| PLearn::HCoordinateDescent | |
| PLearn::HelpCommand | |
| PLearn::HistogramDistribution | |
| PLearn::HSetVal | |
| PLearn::HSV | |
| PLearn::HTryAll | |
| PLearn::HTryCombinations | |
| PLearn::HyperOptimizer | |
| PLearn::IfThenElseVariable | |
| PLearn::IndexAtPositionVariable | |
| PLearn::IndexedVMatrix | VMat class that sees a matrix as a collection of triplets (row, column, value). Thus it is an N x 3 matrix, with N the number of elements in the original matrix |
| PLearn::InMemoryIntStream | |
| PLearn::InterleaveVMatrix | |
| PLearn::InterValuesVariable | If values = [x1,x2,...,x10], the resulting variable is [(x1+x2)/2, (x2+x3)/2, ..., (x9+x10)/2] |
| PLearn::IntPair | Example of class that can be used as key |
| PLearn::IntStream | |
| PLearn::IntStreamVMatrix | |
| PLearn::IntVecFile | |
| PLearn::InvertElementsVariable | |
| iostream | |
| PLearn::IPopen | |
| PLearn::IPServer | |
| PLearn::IsAboveThresholdVariable | Does elementwise newx_i = (x_i >= threshold ? truevalue : falsevalue) |
| PLearn::IsLargerVariable | |
| PLearn::IsMissingVariable | Indicator of missing values: equals 1 where the input is missing (NaN), 0 otherwise |
| PLearn::Isomap | |
| PLearn::IsomapTangentLearner | |
| PLearn::IsSmallerVariable | |
| PLearn::JoinFieldStat | |
| PLearn::JointRandomVariable | |
| PLearn::JoinVMatrix | |
| PLearn::JulianDateCommand | |
| PLearn::JulianizeVMatrix | |
| PLearn::Ker | |
| PLearn::Kernel | |
| PLearn::KernelPCA | |
| PLearn::KernelProjection | |
| PLearn::KernelVMatrix | |
| PLearn::KFoldSplitter | |
| PLearn::KNNVMatrix | |
| PLearn::KolmogorovSmirnovCommand | |
| PLearn::KPCATangentLearner | |
| PLearn::LaplacianKernel | Returns exp(-phi*(sum_i[abs(x1_i - x2_i)])) |
| PLearn::Learner | |
| PLearn::LearnerCommand | |
| PLearn::LearnerProcessedVMatrix | |
| PLearn::LeftPseudoInverseVariable | |
| PLearn::LiftBinaryCostFunction | |
| PLearn::LiftOutputVariable | |
| PLearn::LiftStatsCollector | |
| PLearn::LiftStatsIterator | |
| PLearn::LimitedGaussianSmoother | |
| PLearn::LinearRegressor | |
| PLearn::LLE | |
| PLearn::LLEKernel | |
| PLearn::LocallyWeightedDistribution | |
| PLearn::LocalNeighborsDifferencesVMatrix | |
| PLearn::LogAddVariable | Output = log(exp(input1)+exp(input2)) but it is computed in such a way as to preserve precision |
| PLearn::LogOfGaussianDensityKernel | |
| PLearn::LogRandomVariable | |
| PLearn::LogSoftmaxVariable | |
| PLearn::LogSumVariable | |
| PLearn::LogVariable | |
| PLearn::ManifoldParzen2 | |
| PLearn::ManualBinner | |
| PLearn::MarginPerceptronCostVariable | |
| PLearn::MatlabInterface | |
| PLearn::MatrixAffineTransformFeedbackVariable | Affine transformation of a MATRIX variable |
| PLearn::MatrixAffineTransformVariable | Affine transformation of a MATRIX variable |
| PLearn::MatrixElementsVariable | |
| PLearn::MatrixInverseVariable | |
| PLearn::MatrixOneHotSquaredLoss | |
| PLearn::MatrixSoftmaxLossVariable | |
| PLearn::MatrixSoftmaxVariable | |
| PLearn::MatrixSumOfVariable | |
| PLearn::MatRowVariable | Variable that is the row of matrix mat indexed by variable input |
| PLearn::Max2Variable | |
| PLearn::MaxStatsIterator | |
| PLearn::MaxVariable | |
| PLearn::MeanStatsIterator | |
| PLearn::Measurer | |
| PLearn::MemoryVMatrix | |
| PLearn::MiniBatchClassificationLossVariable | |
| PLearn::MinStatsIterator | |
| PLearn::MinusColumnVariable | |
| PLearn::MinusRandomVariable | |
| PLearn::MinusRowVariable | |
| PLearn::MinusScalarVariable | |
| PLearn::MinusTransposedColumnVariable | |
| PLearn::MinusVariable | |
| PLearn::MinVariable | |
| PLearn::MixtureRandomVariable | |
| PLearn::MovingAverage | This SequentialLearner only uses the n previous targets to predict the next one |
| PLearn::MovingAverageVMatrix | |
| PLearn::MRUFileList | |
| PLearn::MulticlassErrorCostFunction | |
| PLearn::MulticlassLossVariable | Cost = sum_i cost_i, where cost_i = 1 if (target_i == 1 && output_i < 1/2), cost_i = 1 if (target_i == 0 && output_i > 1/2), and cost_i = 0 otherwise |
| PLearn::MultiInstanceNNet | |
| PLearn::MultiInstanceVMatrix | |
| PLearn::MultiMap< A, B > | |
| PLearn::MultinomialRandomVariable | |
| PLearn::MultinomialSampleVariable | |
| PLearn::NaryVariable | |
| PLearn::NearestNeighborPredictionCost | |
| PLearn::NegateElementsVariable | |
| PLearn::NegCrossEntropySigmoidVariable | |
| PLearn::NegKernel | |
| PLearn::NegLogProbCostFunction | |
| PLearn::NegOutputCostFunction | This simply returns -output[0] (target should usually have a length of 0). This is used for density estimators whose use(x) method typically computes log(p(x)) |
| PLearn::NegRandomVariable | |
| PLearn::NeighborhoodSmoothnessNNet | |
| PLearn::NeuralNet | |
| PLearn::NistDB | |
| PLearn::NllSemisphericalGaussianVariable | This class implements the negative log-likelihood cost of a Markov chain that uses semispherical gaussian transition probabilities |
| PLearn::NNet | |
| PLearn::Node | |
| PLearn::NonRandomVariable | |
| PLearn::NormalizedDotProductKernel | |
| PLearn::NullProgressBarPlugin | Simpler plugin that doesn't display a progress bar at all |
| PLearn::NumToStringMapping | |
| PLearn::Object | The Object class |
| PLearn::ObjectGenerator | |
| PLearn::OneHotSquaredLoss | Computes sum_i square(netout[i] - (i==classnum ? hotval : coldval)). This is typically used in a classification setting where netout is a Var of network outputs and classnum is the target class number |
| PLearn::OneHotVariable | Represents a vector of a given length that has value 1 at the index given by another variable and 0 everywhere else |
| PLearn::OneHotVMatrix | |
| PLearn::Optimizer | |
| PLearn::Option< T, Enclosing > | Template class for option definitions |
| PLearn::OptionBase | Base class for option definitions |
| PLearn::PairsVMatrix | |
| PLearn::PCA | |
| PLearn::PConditionalDistribution | |
| PLearn::PDate | |
| PLearn::PDateTime | |
| PLearn::PDistribution | |
| PLearn::PDistributionVariable | |
| PLearn::PIFStream | |
| PLearn::PIStringStream | |
| PLearn::pl_fdstream | |
| PLearn::pl_fdstreambuf | A stream buffer that acts on a POSIX file descriptor |
| PLearn::pl_nullstreambuf | |
| PLearn::pl_stream_clear_flags | |
| PLearn::pl_stream_initiate | |
| PLearn::pl_stream_raw | |
| PLearn::pl_streambuf | |
| PLearn::pl_streammarker | |
| PLearn::PLearnCommand | This is the base class for all PLearn commands (those that can be issued in the plearn program) |
| PLearn::PLearnCommandRegistry | |
| PLearn::PLearner | |
| PLearn::PLearnerOutputVMatrix | |
| PLearn::PLearnError | |
| PLearn::PLearnInit | |
| PLearn::PLMathInitializer | |
| PLearn::PLMPI | PLMPI is just a "namespace holder" (because we're not actually using namespaces) for a few MPI-related variables; all members are static |
| PLearn::PLogPVariable | Returns the elementwise x*log(x) in a (hopefully!) numerically stable way. This can be used to compute the entropy, for instance |
| PLearn::PLS | |
| PLearn::PlusColumnVariable | Adds a single-column var to each column of a matrix var |
| PLearn::PlusConstantVariable | Adds a scalar constant to a matrix var |
| PLearn::PlusRandomVariable | |
| PLearn::PlusRowVariable | Adds a single-row var to each row of a matrix var |
| PLearn::PlusScalarVariable | Adds a scalar var to a matrix var |
| PLearn::PlusVariable | Adds 2 matrix vars of same size |
| PLearn::POFStream | |
| PLearn::PolynomialKernel | Returns (beta*dot(x1,x2)+1)^n |
| PLearn::Popen | |
| PLearn::PowDistanceKernel | |
| PLearn::PowVariable | Elementwise pow (returns 0 wherever input is negative) |
| PLearn::PowVariableVariable | |
| PLearn::PP< T > | |
| PLearn::PPointable | |
| PLearn::PPointableSet | |
| PLearn::PrecomputedKernel | A kernel that precomputes the kernel matrix as soon as setDataForKernelMatrix is called |
| PLearn::PrecomputedVMatrix | |
| PLearn::PreprocessingVMatrix | |
| PLearn::PricingTransactionPairProfitFunction | |
| PLearn::ProbabilitySparseMatrix | |
| PLearn::ProbSparseMatrix | |
| PLearn::ProbVector | |
| PLearn::ProcessingVMatrix | |
| PLearn::ProductRandomVariable | |
| PLearn::ProductTransposeVariable | Matrix product between matrix1 and transpose of matrix2 |
| PLearn::ProductVariable | Matrix product |
| PLearn::Profiler | |
| PLearn::Profiler::Stats | |
| PLearn::ProgressBar | This class will help you display progress of a calculation |
| PLearn::ProgressBarPlugin | Base class for pb plugins |
| PLearn::ProjectionErrorVariable | The first input is a set of n_dim vectors f_i (possibly seen as a single vector of their concatenation), each in R^n. The second input is a set of T vectors t_j (possibly seen as a single vector of their concatenation), each in R^n. The output is sum_j min_{w_j} || t_j - sum_i w_{ji} f_i ||^2, where row w_j of w is optimized analytically and separately for each j |
| PLearn::PSMat | |
| PLearn::PStream | |
| PLearn::PStreamBuf | |
| PLearn::PTester | This code is deprecated; use PTester.h and PTester.cc instead |
| PLearn::QuadraticUtilityCostFunction | |
| PLearn::QuantilesStatsIterator | |
| PLearn::RandomElementOfRandomVariable | RandomVariable that is the element of the first parent RandomVariable indexed by the second parent RandomVariable |
| PLearn::RandomVar | We follow the same pattern as Var & Variable |
| PLearn::RandomVariable | |
| PLearn::RandomVarVMatrix | This is a convenient wrapping around the required data structures for efficient repeated sampling from a RandomVar |
| PLearn::Range | |
| PLearn::RangeVMatrix | Outputs scalar samples (length 1) starting at start, up to end (inclusive) with step. When end is reached it starts over again |
| PLearn::ReadAndWriteCommand | |
| PLearn::RealMapping | |
| PLearn::RealRange | Real range: i.e. one of ]low,high[ ; [low,high[; [low,high]; ]low,high] |
| PLearn::ReconstructionWeightsKernel | |
| PLearn::RegularGridVMatrix | |
| PLearn::RemapLastColumnVMatrix | |
| PLearn::RemoveDuplicateVMatrix | |
| PLearn::RemoveRowsVMatrix | Sees an underlying VMat with the specified rows excluded |
| PLearn::RepeatSplitter | |
| PLearn::ReshapeVariable | Variable that views another variable, but with a different length() and width() (the only restriction being that length()*width() remain the same) |
| PLearn::ResourceSemaphore | |
| PLearn::RGB | |
| PLearn::RGBImage | Uses a top-left coordinate system: pixel (i,j) is at row i, column j |
| PLearn::RGBImageDB | |
| PLearn::RGBImagesVMatrix | |
| PLearn::RGBImageVMatrix | |
| PLearn::RightPseudoInverseVariable | |
| PLearn::Row | |
| PLearn::RowAtPositionVariable | |
| PLearn::RowBufferedVMatrix | |
| PLearn::RowIterator | |
| PLearn::RowMapSparseMatrix< T > | |
| PLearn::RowMapSparseValueMatrix< T > | |
| PLearn::RowsSubVMatrix | |
| PLearn::RowSumVariable | Result is a single column that contains the sum of each row of the input |
| PLearn::RunCommand | |
| PLearn::RunObject | |
| PLearn::RVArray | An RVArray stores a table of RandomVar's |
| PLearn::RVArrayRandomElementRandomVariable | |
| PLearn::RVInstance | RVInstance represents a RandomVariable V along with a "value" v |
| PLearn::RVInstanceArray | |
| PLearn::ScaledConditionalCDFSmoother | |
| PLearn::ScaledGaussianKernel | Returns exp(-sum_i[(phi_i*(x1_i - x2_i))^2]/sigma^2) |
| PLearn::ScaledGeneralizedDistanceRBFKernel | Returns exp(-(sum_i phi_i*[abs(x1_i^a - x2_i^a)^b])^c) |
| PLearn::ScaledGradientOptimizer | |
| PLearn::ScaledLaplacianKernel | Returns exp(-(sum_i[abs(x1_i - x2_i)*phi_i])) |
| PLearn::Schema | |
| PLearn::SDBVMatrix | |
| PLearn::SDBVMField | |
| PLearn::SDBVMFieldAffine | Apply an affine transformation to the field: y = a*x+b |
| PLearn::SDBVMFieldAsIs | Pass through the value within the SDB (after conversion to real of the underlying SDB type) |
| PLearn::SDBVMFieldCodeAsIs | |
| PLearn::SDBVMFieldDate | Convert a date to fill 3 columns in the VMat: YYYY, MM, DD |
| PLearn::SDBVMFieldDateDiff | Difference between two dates ("source1-source2" expressed as an integer number of days, months, or years) |
| PLearn::SDBVMFieldDateGreater | Verifies if the date within the row is greater than a threshold date |
| PLearn::SDBVMFieldDay | |
| PLearn::SDBVMFieldDiscrete | A field that recodes its source field according to an OutputCoder object |
| PLearn::SDBVMFieldDivSigma | Just divide by standard deviation |
| PLearn::SDBVMFieldFunc1 | |
| PLearn::SDBVMFieldFunc2 | |
| PLearn::SDBVMFieldHasClaim | |
| PLearn::SDBVMFieldICBCClassification | |
| PLearn::SDBVMFieldICBCTargets | |
| PLearn::SDBVMFieldMonths | Computes year*12 + (month-1) |
| PLearn::SDBVMFieldMultiDiscrete | |
| PLearn::SDBVMFieldNormalize | Normalize the field (subtract the mean then divide by standard dev) |
| PLearn::SDBVMFieldPosAffine | Take the positive part of the field, followed by affine transformation: y = a*max(x,0)+b |
| PLearn::SDBVMFieldRemapIntervals | |
| PLearn::SDBVMFieldRemapReals | |
| PLearn::SDBVMFieldRemapStrings | |
| PLearn::SDBVMFieldSignedPower | Does the following: y = x^a |
| PLearn::SDBVMFieldSource1 | A field that maps exactly 1 SDB field to a VMatrix segment (abstract) |
| PLearn::SDBVMFieldSource2 | A field that maps exactly 2 SDB fields to a VMatrix segment (abstract) |
| PLearn::SDBVMFieldSumClaims | |
| PLearn::SDBVMOutputCoder | |
| PLearn::SDBVMSource | A SDBVMSource represents a source for a value that can be either directly a field from a SDB or an already processed SDBVMField |
| PLearn::SDBWithStats | |
| PLearn::SelectColumnsVMatrix | Selects variables (columns) from a source matrix according to given vector of indices |
| PLearn::SelectedIndicesCmp< T > | |
| PLearn::SelectedOutputCostFunction | Allows applying a cost function to a single output element (and the corresponding target element) of a larger output vector, rather than to the whole vector |
| PLearn::SelectInputSubsetLearner | |
| PLearn::SelectRowsFileIndexVMatrix | |
| PLearn::SelectRowsVMatrix | Selects samples from a source matrix according to given vector of indices |
| PLearn::SemId | This class is defined in order to distinguish semaphore and shared memory id's from plain integers when constructing a Semaphore or a SharedMemory object |
| PLearn::SemiSupervisedProbClassCostVariable | |
| PLearn::semun | |
| PLearn::SentencesBlocks | |
| PLearn::SequentialLearner | |
| PLearn::SequentialModelSelector | |
| PLearn::SequentialSplitter | |
| PLearn::SequentialValidation | |
| PLearn::Set | |
| set | |
| PLearn::SetOption | |
| PLearn::SharedMemory< T > | |
| PLearn::SharpeRatioStatsIterator | |
| PLearn::ShellProgressBar | |
| PLearn::ShellScript | |
| PLearn::ShiftAndRescaleVMatrix | |
| PLearn::short_and_twobytes | |
| PLearn::SigmoidalKernel | Returns sigmoid(c*x1.x2) |
| PLearn::SigmoidPrimitiveKernel | Returns log(1+exp(c*x1.x2)) = primitive of sigmoidal kernel |
| PLearn::SigmoidVariable | |
| PLearn::SignVariable | Sign(x) = 1 if x>0, -1 if x<0, 0 if x=0, all done element by element |
| PLearn::SimpleDB< KeyType, QueryResult > | |
| PLearn::SimpleDBIndexKey< KeyType > | |
| PLearn::SmallVector< T, SizeBits, Allocator > | |
| PLearn::SMat< T > | |
| PLearn::SmoothedProbSparseMatrix | |
| PLearn::Smoother | |
| PLearn::SoftmaxLossVariable | |
| PLearn::SoftmaxVariable | |
| PLearn::SoftplusVariable | This is the primitive of a sigmoid: log(1+exp(x)) |
| PLearn::SoftSlopeIntegralVariable | |
| PLearn::SoftSlopeVariable | |
| PLearn::SortRowsVMatrix | Sort the samples of a VMatrix according to one (or more) given columns |
| PLearn::SourceKernel | |
| PLearn::SourceSampleVariable | |
| PLearn::SourceVariable | |
| PLearn::SourceVMatrix | |
| PLearn::SourceVMatrixSplitter | |
| PLearn::SparseMatrix | |
| PLearn::SparseVMatrix | |
| PLearn::SparseVMatrixRow | |
| PLearn::SpectralClustering | |
| PLearn::SpiralDistribution | |
| PLearn::Splitter | |
| PLearn::SquaredErrorCostFunction | One of the 'kernels' that are actually used as cost functions |
| PLearn::SquareRootVariable | |
| PLearn::SquareVariable | |
| PLearn::StackedLearner | |
| PLearn::StatefulLearner | |
| PLearn::StaticInitializer | A StaticInitializer is typically declared as a static member of a class, and given a parameter that is a static initialization function for said class |
| PLearn::StatsCollector | |
| PLearn::StatsCollectorCounts | |
| PLearn::StatsItArray | |
| PLearn::StatsIterator | |
| PLearn::StatSpec | The specification of a statistic to compute (as can be specified as a string in PTester) |
| PLearn::StddevStatsIterator | |
| PLearn::StderrStatsIterator | |
| PLearn::StdPStreamBuf | |
| PLearn::StochasticRandomVariable | |
| PLearn::Storage< T > | |
| streambuf | |
| PLearn::StringFieldMapping | |
| PLearn::StringTable | |
| PLearn::StrTableVMatrix | |
| PLearn::SubInputVMatrix | |
| PLearn::SubMatTransposeVariable | |
| PLearn::SubMatVariable | |
| PLearn::SubsampleVariable | A subsample var; equals subsample(input, the_subsamplefactor) |
| PLearn::SubVecRandomVariable | Y = sub-vector of X starting at position "start", of length "value->length()" |
| PLearn::SubVMatrix | |
| PLearn::SumAbsVariable | |
| PLearn::SumOfVariable | |
| PLearn::SumOverBagsVariable | |
| PLearn::SumSquareVariable | |
| PLearn::SumVariable | |
| PLearn::Symbol | |
| PLearn::TangentLearner | |
| PLearn::TanhVariable | |
| PLearn::TemporalHorizonVMatrix | This VMat delays the last targetsize entries of an underlying VMat by a certain horizon |
| PLearn::TestDependenciesCommand | |
| PLearn::TestDependencyCommand | |
| PLearn::TestingLearner | |
| PLearn::TestInTrainSplitter | |
| PLearn::TestMethod | |
| PLearn::TextProgressBarPlugin | Simple plugin for displaying text progress bar |
| PLearn::TextSenseSequenceVMatrix | This class handles a sequence of word/sense-tag/POS triplets, presenting them as target words and their contexts |
| PLearn::ThresholdVMatrix | |
| PLearn::TimesColumnVariable | Multiplies each column of a matrix var elementwise with a single column variable |
| PLearn::TimesConstantVariable | Multiplies a matrix var by a scalar constant |
| PLearn::TimesRowVariable | Multiplies each row of a matrix var elementwise with a single row variable |
| PLearn::TimesScalarVariable | Multiplies a matrix var by a scalar var |
| PLearn::TimesVariable | Multiplies 2 matrix vars of same size elementwise |
| PLearn::TinyVector< T, N, TTrait > | |
| PLearn::TinyVectorTrait< T > | |
| PLearn::TinyVectorTrait< char > | |
| PLearn::TinyVectorTrait< int > | |
| PLearn::TinyVectorTrait< unsigned char > | |
| PLearn::TinyVectorTrait< unsigned int > | |
| PLearn::TMat< T > | |
| PLearn::TMatColRowsIterator< T > | Model of the Random Access Iterator concept for iterating through a single column of a TMat, one row at a time |
| PLearn::TMatElementIterator< T > | |
| PLearn::TMatRowsAsArraysIterator< T > | Model of the Random Access Iterator concept for iterating through the ROWS of a TMat |
| PLearn::TMatRowsIterator< T > | Model of the Random Access Iterator concept for iterating through the ROWS of a TMat |
| PLearn::TmpFilenames | |
| PLearn::ToBagSplitter | |
| PLearn::TopNI< T > | |
| PLearn::Train | |
| PLearn::TrainTestBagsSplitter | |
| PLearn::TrainTestSplitter | |
| PLearn::TrainValidTestSplitter | |
| PLearn::TransposeProductVariable | Matrix product between transpose of matrix1 and matrix2 |
| PLearn::TransposeVMatrix | |
| PLearn::tRule | |
| PLearn::TTensor< T > | |
| PLearn::TTensorElementIterator< T > | |
| PLearn::TTensorSubTensorIterator< T > | |
| PLearn::TVec< T > | |
| PLearn::TypeFactory | |
| PLearn::TypeMapEntry | |
| PLearn::TypeTraits< T > | |
| PLearn::TypeTraits< Array< T > > | |
| PLearn::TypeTraits< list< T > > | |
| PLearn::TypeTraits< map< T, U > > | |
| PLearn::TypeTraits< pair< T, U > > | |
| PLearn::TypeTraits< PP< T > > | |
| PLearn::TypeTraits< RealMapping > | |
| PLearn::TypeTraits< SetOption > | |
| PLearn::TypeTraits< string > | |
| PLearn::TypeTraits< T * > | |
| PLearn::TypeTraits< TMat< T > > | |
| PLearn::TypeTraits< TVec< T > > | |
| PLearn::TypeTraits< vector< T > > | |
| PLearn::UCISpecification | |
| PLearn::UnaryHardSlopeVariable | |
| PLearn::UnarySampleVariable | |
| PLearn::UnaryVariable | |
| PLearn::UnconditionalDistribution | |
| PLearn::UnequalConstantVariable | A scalar var; equals 1 if input1!=c, 0 otherwise |
| PLearn::UnfoldedFuncVariable | |
| PLearn::UnfoldedSumOfVariable | |
| PLearn::UniformDistribution | |
| PLearn::UniformizeVMatrix | |
| PLearn::UniformSampleVariable | |
| PLearn::UniformVMatrix | |
| PLearn::UpsideDownVMatrix | |
| PLearn::Var | |
| PLearn::VarArray | |
| PLearn::VarArrayElementVariable | Variable that is the element of the input1 VarArray indexed by the input2 variable |
| PLearn::VarColumnsVariable | |
| PLearn::VarElementVariable | |
| PLearn::Variable | |
| PLearn::VarMeasurer | |
| PLearn::VarRowsVariable | |
| PLearn::VarRowVariable | Variable that is the row of the input1 variable indexed by the input2 variable |
| PLearn::VecCompressor | |
| PLearn::VecElementVariable | Variable that is the element of vector vec indexed by variable input |
| PLearn::VecExtendedVMatrix | |
| PLearn::VecStatsCollector | |
| vector | |
| PLearn::VMat | |
| PLearn::VMatCommand | |
| PLearn::VMatLanguage | |
| PLearn::VMatrix | |
| PLearn::VMatrixFromDistribution | |
| PLearn::VMField | VMField contains a fieldname and a fieldtype |
| PLearn::VMFieldStat | This class holds simple statistics about a field |
| PLearn::VVec | A VVec is a reference to a row or part of a row (a subrow) of a VMatrix |
| PLearn::VVMatrix | This class is a wrapper for a .vmat VMatrix |
| PLearn::WeightedCostFunction | A cost function that reweights another cost function (the weight being the last element of the target). Returns target.lastElement() * costfunc(output, target.subVec(0, target.length()-1)) |
| PLearn::WeightedSumSquareVariable | |
| PLearn::WordNetOntology | |
| PLearn::YMDDatedVMatrix | |
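
Several kernel classes above quote their formulas directly, e.g. DotProductKernel returns <x1,x2>, GaussianKernel returns exp(-norm_2(x1-x2)^2/sigma^2), and PolynomialKernel returns (beta*dot(x1,x2)+1)^n. The standalone C++ sketch below only illustrates that arithmetic on plain vectors; it is not the PLearn API (PLearn kernels are Kernel subclasses, and the free function names here are made up for illustration).

```cpp
// Minimal sketch of a few kernel formulas quoted in the class index above.
// These are plain functions on std::vector<double>, not PLearn Kernel objects.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Plain dot product <x1,x2>, as returned by DotProductKernel.
static double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// GaussianKernel formula: exp(-||x1-x2||^2 / sigma^2).
static double gaussian_kernel(const std::vector<double>& x1,
                              const std::vector<double>& x2, double sigma) {
    double sq = 0.0;
    for (std::size_t i = 0; i < x1.size(); ++i) {
        double d = x1[i] - x2[i];
        sq += d * d;
    }
    return std::exp(-sq / (sigma * sigma));
}

// PolynomialKernel formula: (beta*<x1,x2> + 1)^n.
static double polynomial_kernel(const std::vector<double>& x1,
                                const std::vector<double>& x2,
                                double beta, int n) {
    return std::pow(beta * dot(x1, x2) + 1.0, n);
}

int main() {
    std::vector<double> a = {1.0, 2.0}, b = {0.5, -1.0};
    std::printf("dot product = %g\n", dot(a, b));
    std::printf("gaussian    = %g\n", gaussian_kernel(a, b, 1.0));
    std::printf("polynomial  = %g\n", polynomial_kernel(a, b, 1.0, 2));
    return 0;
}
```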