CSV Upload

In some cases, you may already have a CSV file that you want to use. This can be raw sensor data with labels, or features that you have cached. We separate these into DataFiles and FeatureFiles, which we explain below.

DataFiles

A DataFile allows you to upload sensor data into a pipeline for testing, rather than using Project Capture data. Files must be in CSV format. Using DataFiles is convenient when:

  1. You have test data that you want to evaluate against a model without adding the file to your Project Capture list.

Examples:

# list the DataFiles currently on the server
client.list_datafiles()

# if you want to upload directly from a CSV file; force=True overwrites the file on the server if it exists
client.upload_data_file(name, path, force=True)

# if you already have a dataframe
client.upload_dataframe(name, dataframe)
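
For example, a minimal sketch of building a pandas DataFrame of raw sensor data and uploading it as a DataFile (this assumes client is an already-connected SensiML client; the column names and file name are hypothetical):

import pandas as pd

# hypothetical raw sensor data with a label column; your columns will differ
sensor_df = pd.DataFrame(
    {
        "AccelerometerX": [0, 12, -3, 5],
        "AccelerometerY": [1, -7, 4, 9],
        "Label": ["Idle", "Idle", "Walking", "Walking"],
    }
)

# upload the DataFrame so it can be used as test input for a pipeline
client.upload_dataframe("my_test_data.csv", sensor_df)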

FeatureFiles

A FeatureFile can be used to load data directly into a pipeline, rather than querying Project Capture data. Files must be in CSV format. Using FeatureFiles is convenient when:

  1. You want to cache features locally and then use them as input to the training algorithm, so you can avoid re-running earlier steps in the pipeline.

Examples:

# list the FeatureFiles currently on the server
client.list_featurefiles()

# if you want to upload directly from a CSV file
client.upload_feature_file(name, path)

# if you already have a dataframe
client.upload_dataframe(name, dataframe)
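
For example, a minimal sketch of caching generated features locally and uploading them as a FeatureFile so later runs can skip the feature-generation steps (this assumes client is an already-connected SensiML client; the column names, label column, and file name are hypothetical):

import pandas as pd

# hypothetical cached feature vectors with a label column; your columns will differ
features_df = pd.DataFrame(
    {
        "gen_0001_AccelX_Mean": [0.10, 0.42, -0.21],
        "gen_0002_AccelX_StdDev": [0.05, 0.12, 0.08],
        "Label": ["Idle", "Walking", "Walking"],
    }
)
features_df.to_csv("cached_features.csv", index=False)

# upload the cached features for use as direct pipeline input
client.upload_feature_file("cached_features.csv", "cached_features.csv")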
class sensiml.datamanager.featurefiles.FeatureFiles(connection: Connection, project: Project)

Base class for a collection of featurefiles.

build_datafile_list() dict

Builds a dictionary of the project's DataFiles from the server.

build_featurefile_list() dict

Builds a dictionary of the project's FeatureFiles from the server.

build_full_list() dict

Builds a dictionary of all DataFiles and FeatureFiles in the project from the server.
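
A minimal sketch of the three list builders, assuming connection and project objects are already available and that the collection is constructed directly from this class:

>>> featurefiles = FeatureFiles(connection, project)
>>> datafiles_dict = featurefiles.build_datafile_list()        # DataFiles only
>>> featurefiles_dict = featurefiles.build_featurefile_list()  # FeatureFiles only
>>> all_files_dict = featurefiles.build_full_list()            # both combined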

create_featurefile(filename: str, path: str, is_features: bool = False, label_column: str = '') FeatureFile

Creates a featurefile object from the filename and path.

Parameters
  • filename (str) – desired name of the featurefile on the server, must have a .csv or .arff extension

  • path (str) – full local path to the file, including the file’s local name and extension

Returns

featurefile object

Raises

FeatureFileExistsError, if the featurefile already exists on the server
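
A minimal sketch, assuming featurefiles is a FeatureFiles collection as above; the file name, local path, and label column are hypothetical:

>>> ff = featurefiles.create_featurefile(
...     filename="cached_features.csv",
...     path="/path/to/cached_features.csv",
...     is_features=True,
...     label_column="Label",
... )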

get_by_name(filename: str) FeatureFile

Gets a featurefile or datafile from the server referenced by name.

Parameters

filename – name of the featurefile as stored on the server

Returns

featurefile object or None if it does not exist
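
A minimal sketch, assuming featurefiles is a FeatureFiles collection as above; the file name is hypothetical:

>>> ff = featurefiles.get_by_name("cached_features.csv")
>>> if ff is None:
...     print("No featurefile or datafile with that name exists on the server.")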

get_featurefile(uuid) FeatureFile

Gets a featurefile from the server referenced by its UUID.

Returns

featurefile object

get_featurefiles() list[sensiml.datamanager.featurefile.FeatureFile]

Gets a list of all featurefiles in the project.

Returns

list (featurefiles)
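
A minimal sketch that prints the server-side name of every featurefile in the project, assuming featurefiles is a FeatureFiles collection as above:

>>> for ff in featurefiles.get_featurefiles():
...     print(ff.filename)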

new_featurefile() FeatureFile

Initializes a new featurefile object, but does not insert it.

class sensiml.datamanager.featurefile.FeatureFile(connection: Connection, project: Project, name: str = '', path: str = '', is_features: bool = True, uuid: Optional[str] = None, label_column: str = '', number_rows: Optional[int] = None)

Base class for a featurefile object.

compute_analysis(analysis_type: str = 'UMAP', **kwargs) Response

Calls the REST API to compute the analysis for the feature file.

Parameters

analysis_type (str) – the type of analysis, i.e. "UMAP" (default), "TSNE", or "PCA".

Kwargs:

  • shuffle_seed (int): random seed used to shuffle and resample the feature vectors

  • analysis_seed (int): random state of the analysis (default is 0)

  • n_neighbor (int): the size of the local neighborhood (in terms of number of neighboring sample points) used for manifold approximation. If not specified, the default is the number of unique labels.

  • n_components (int): the dimension of the output result (default is 2). n_components is adjusted based on the method, the dimension of the feature vector, and the number of samples.

  • n_sample (int): maximum number of output samples (default is 1000).

Returns

A JSON response containing the metadata of the generated analysis.

Example

>>> feature_file = client.get_featurefile(<feature-file uuid>)
>>> response = feature_file.compute_analysis(analysis_type="PCA", shuffle_seed=13, n_components=5)
>>> response.json()
property created_at: datetime

Date of the featurefile's creation

delete() Response

Calls the REST API and deletes the featurefile from the server.

download() Response

Calls the REST API and retrieves the featurefile’s binary data.

Returns

featurefile contents

download_json() Response

Calls the REST API and retrieves the featurefile's JSON data.

Returns

featurefile contents as JSON
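
A minimal sketch of retrieving the file contents, assuming feature_file was obtained as in the compute_analysis example above and that the returned Response behaves like a requests.Response:

>>> response = feature_file.download()
>>> with open("local_copy.csv", "wb") as f:   # hypothetical local file name
...     f.write(response.content)
>>> records = feature_file.download_json().json()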

property filename: str

The name of the file as stored on the server

Note

Filename must contain a .csv or .arff extension

insert() Response

Calls the REST API to insert a new featurefile.

property is_features: bool

Whether this is a FeatureFile (True) or a DataFile (False)

list_analysis()

Calls the REST API and retrieves the list of computed analyses for the featurefile.

Returns

JSON response holding the list of all computed analyses
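
A minimal sketch, assuming feature_file was obtained as in the compute_analysis example above:

>>> analyses = feature_file.list_analysis()
>>> print(analyses)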

refresh() Response

Calls the REST API and populates the featurefile's properties from the server.

update() Response

Calls the REST API to update the featurefile’s properties on the server.
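
A minimal sketch of the remaining maintenance calls, assuming feature_file was obtained as in the compute_analysis example above; each call returns a Response:

>>> feature_file.refresh()   # re-populate local properties from the server
>>> feature_file.update()    # push local property changes to the server
>>> feature_file.delete()    # remove the featurefile from the server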