Knowledge Packs
The model that is generated for your edge device is called a Knowledge Pack. A Knowledge Pack contains the device firmware code for detecting events in your application and is what gets flashed to your device. It also contains the information about how the model was trained, the metrics for the trained model, and the full trained pipeline.
Download Knowledge Pack
To download a Knowledge Pack, you will need to create a configuration for the download.
The first part of the configuration is the kb_description, which describes the models that will be part of the Knowledge Pack. The format is as follows:
kb_description = {
    "MODEL_1": {
        "source": "<CAPTURE CONFIG UUID>",
        "uuid": "<Model UUID>"
    },
}
The model name (MODEL_1 above) can be whatever you choose. source is the UUID of the capture configuration the model should use (this sets up the correct sensors and sample rate), and uuid is the UUID of the model itself.
To get the UUIDs of the capture configurations and Knowledge Packs for a particular project, use the following:
client.project.list_knowledgepacks()
client.list_capture_configurations()
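Both calls return listings of the items in the project that you can inspect to find the UUIDs you need. A minimal sketch (the exact columns shown may vary by client version):
model_list = client.project.list_knowledgepacks()
capture_config_list = client.list_capture_configurations()
print(model_list)           # copy the UUID of the model to include
print(capture_config_list)  # copy the UUID of its capture configuration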
Next, we will create the config for the download. To see a list of available target platforms along with their application and output_options, use:
client.platforms_v2()
You can generate a template configuration using the following, replacing the "x86 GCC Generic" platform name with the platform you would like to target:
config = client.platforms_v2.get_platform_by_name('x86 GCC Generic').get_config()
print(config)
{'target_platform': '26eef4c2-6317-4094-8013-08503dcd4bc5',
'test_data': '',
'debug': False,
'output_options': ['serial'],
'application': 'SensiML AI Model Runner',
'target_processor': '822581d2-8845-4692-bcac-4446d341d4a0',
'target_compiler': '62aabe7e-4f5d-4167-a786-072e468dc158',
'float_options': '',
'selected_platform_version': ''}
Add the kb_description to the configuration:
config["kb_description"] = kb_description
Finally, we can download the model as a library, as source code (if supported by your subscription), or as a binary (if supported by your platform).
kp = client.get_knowledgepack("<MODEL UUID>")
kp.download_library_v2(config=config)
# kp.download_binary_v2(config=config)
# kp.download_source_v2(config=config)
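If you prefer to wait for the build to finish and save the result locally, the folder and run_async parameters of the download APIs (documented below) can be passed as well. A minimal sketch; the folder name here is illustrative:
kp.download_library_v2(config=config, folder="knowledgepacks", run_async=False)  # "knowledgepacks" is an example folder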
Multi-Model Knowledge Pack
If you want to download a Knowledge Pack containing multiple models, you will need to use the programmatic interface. Models are either parent models or child models. Parent models require a source. Child models require the segmenter_from and parent fields. The optional results field allows a child model to be called depending on the output of its parent model.
The model graphs are defined in the kb_description. The format for multiple parent models is as follows:
{
    "MODEL_1": {
        "source": "<CAPTURE CONFIG UUID>",
        "uuid": "<Model UUID>"
    },
    "MODEL_2": {
        "source": "<CAPTURE CONFIG UUID>",
        "uuid": "<Model UUID>"
    },
    "MODEL_3": {
        "source": "<CAPTURE CONFIG UUID>",
        "uuid": "<Model UUID>"
    }
}
The format for multiple parent models with multiple child models is as follows:
{
    "PARENT_1": {
        "source": "<CAPTURE CONFIG UUID>",
        "uuid": "<Model UUID>",
        "results": {
            "1": "CHILD_1",
            "2": "CHILD_2"
        }
    },
    "PARENT_2": {
        "source": "<CAPTURE CONFIG UUID>",
        "uuid": "<Model UUID>"
    },
    "CHILD_1": {
        "uuid": "<Model UUID>",
        "parent": "PARENT_1",
        "results": {
            "1": "CHILD_4"
        },
        "segmenter_from": "parent"
    },
    "CHILD_2": {
        "uuid": "<Model UUID>",
        "parent": "PARENT_1",
        "segmenter_from": "parent"
    },
    "CHILD_3": {
        "uuid": "<Model UUID>",
        "parent": "PARENT_1",
        "segmenter_from": "parent"
    },
    "CHILD_4": {
        "uuid": "<Model UUID>",
        "parent": "PARENT_1",
        "segmenter_from": "parent"
    }
}
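The multi-model kb_description is passed in the download configuration the same way as the single-model case. A minimal sketch, assuming you fetch one of the models in the graph (the parent's UUID is used here as a placeholder) and that config was generated as shown above:
config["kb_description"] = kb_description
kp = client.get_knowledgepack("<PARENT_1 Model UUID>")  # placeholder UUID
kp.download_library_v2(config=config)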
Import and Export Model
In some instances you may want to import or export a model you have created. To support this, you can use the export and create APIs.
To export a model, use the export() API shown below.
import json
kp = client.get_knowledgepack("<MODEL UUID>")
exported_model = kp.export()
json.dump(exported_model, open('exported-model.json','w'))
You can then import that model and upload it to the server under a new name.
from sensiml.datamanager.knowledgepack import KnowledgePack
kp = KnowledgePack(client._connection, client.project.uuid)
kp.initialize_from_dict(json.load(open("exported-model.json",'r')))
kp._name = "Imported Model"
kp.create()
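After create() completes, the imported model should appear when you list the project's Knowledge Packs again:
client.project.list_knowledgepacks()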
- class sensiml.datamanager.knowledgepack.KnowledgePack(connection: Connection, project_uuid: str, sandbox_uuid: str = '')
Base class for a KnowledgePack
- add_feature_only_generators(generators: list[dict])
Adds feature-only generators to a Knowledge Pack. You can add the generators and then create a new Knowledge Pack. The new Knowledge Pack will generate these features, but will not use them as part of the classifier.
new_generators = [
    {
        "family": None,
        "inputs": {"columns": ["channel_0"]},
        "num_outputs": 1,
        "function_name": "100th Percentile",
        "subtype": "Stats",
    }
]
kp.add_feature_only_generators(new_generators)
kp._name = "KP with Extra Features"
kp.create()
- property class_map: dict
A summary of the integer classes/categories used by the KnowledgePack and the corresponding application categories
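For example, you can print the mapping before interpreting classification output; the label names below are purely illustrative:
print(kp.class_map)
# e.g. {1: 'Running', 2: 'Walking'} (actual keys and labels come from your project)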
- property cost_dict: dict
A summary of device costs incurred by the KnowledgePack
- property cost_report: str
A printed tabular report of the device cost incurred by the KnowledgePack
- property cost_report_json: str
A JSON report of the Knowledge Pack cost summary
- cost_resource_summary(processor_uuid: Optional[str] = None, hardware_accelerator: Optional[str] = None)
A summary of resources and time needed in a classification from a Knowledge Pack
- create() KnowledgePack
Create a new knowledge pack on the server using the internal data for this model
- Returns
KnowledgePack object
- delete() Response
Deletes the knowledgepack
- Returns
the server response to the delete request
- Return type
(Response)
- download_binary_v2(folder: str = '', run_async: bool = True, platform: Optional[ClientPlatformDescription] = None, renderer=None, *args, **kwargs)
Calls the server to generate a full binary image based on the device config.
- Parameters
folder (str) – Folder to save to if not generating a link
- Returns
Denoting success, or link to file download
- Return type
str
- download_library_v2(folder: str = '', run_async: bool = True, platform: Optional[ClientPlatformDescription] = None, renderer=None, *args, **kwargs)
Calls the server to generate a static library image based on the device config.
- Parameters
folder (str) – Folder to save to if not generating a link
- Returns
Denoting success, or link to file download
- Return type
str
- download_source_v2(folder: str = '', run_async: bool = True, platform: Optional[ClientPlatformDescription] = None, renderer=None, *args, **kwargs)
Calls the server to generate the Knowledge Pack source code based on the device config.
- Parameters
folder (str) – Folder to save to if not generating a link
- Returns
Denoting success, or link to file download
- Return type
str
- export() dict
Export a Knowledge Pack Model
- Returns
Knowledge Pack Export Dict
- property feature_summary: dict
A summary of the features generated by the KnowledgePack
- get_report(report_type: str, processor_uuid: Optional[str] = None, hardware_accelerator: Optional[str] = None)
Sends a request for a report to the server and returns the result.
- Parameters
report_type (string) – string name of report, ex: ‘cost’
- Returns
string representation of desired report
- Return type
(string)
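For example, to request the cost report named in the docstring above:
print(kp.get_report("cost"))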
- property knowledgepack_description: dict
Description of the Knowledge Pack. It is used for hierarchical models created by Autosense.
- property knowledgepack_summary: dict
A summary of device costs incurred by the KnowledgePack
- property model_configuration: dict
Model Configuration
- property model_parameters: dict
The model’s parameters
- property model_results: dict
The model results associated with the KnowledgePack (in JSON form)
- property neuron_array: dict
The model’s neuron array
- property pipeline_summary: list[dict]
A summary specification of the pipeline which created the KnowledgePack
- property query_summary: dict
A summary specification of the query used by the pipeline which created the KnowledgePack
- recognize_features(data: dict) DataFrame
Sends a single vector of features to the KnowledgePack for recognition.
- Parameters
data (dict) – dictionary containing:
'Vector': the feature vector to classify, e.g. [126, 32, 0, …]
'DesiredResponses': the number of neuron responses to return
- Returns
dictionary containing
CategoryVector (list): numerical categories of the neurons that fired
MappedCategoryVector (list): original class categories of the neurons that fired
NIDVector (list): ID numbers of the neurons that fired
DistanceVector (list): distances of the feature vector to each neuron that fired
- Return type
(dict)
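A minimal usage sketch; the feature vector here is made up, and its length must match your model's feature summary:
result = kp.recognize_features({"Vector": [126, 32, 0], "DesiredResponses": 2})
print(result)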
- recognize_signal(capture: Optional[str] = None, datafile: Optional[str] = None, test_plan: Optional[str] = None, stop_step: Optional[int] = False, segmenter: bool = True, platform: str = 'emulator', get_result: bool = True, kb_description: Optional[dict] = None, compare_labels: bool = False, renderer: Optional[str] = None) DataFrame
Sends a DataFrame of raw signals to be run through the feature generation pipeline and recognized.
- Parameters
capture (str) – The name of a capture file uploaded through the Data Capture Lab
datafile (str) – The name of an uploaded datafile
platform (str) – "emulator" or "cloud". The "emulator" runs compiled C code, giving device-exact results; the "cloud" runs the pipeline similarly to training, providing more flexibility, such as returning early results by setting stop_step.
stop_step (int) – for debugging, if you want to stop the pipeline at a particular step, set stop_step to its index
compare_labels (bool) – If there are labels for the input dataframe, use them to create a confusion matrix
segmenter (bool or FunctionCall) – to suppress or override the segmentation algorithm in the original pipeline, set this to False or a function call of type ‘segmenter’ (defaults to True)
lock (bool, True) – If True, waits for the result to return before releasing the ipython cell.
- Returns
dictionary of results and summary statistics from the executed pipeline and recognition
- Return type
(dict)
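A minimal usage sketch, assuming "<capture name>" is a capture file already uploaded to the project:
results = kp.recognize_signal(capture="<capture name>", platform="emulator")
print(results)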
- retrieve(silent: bool = False)
Gets the result of a prior asynchronous execution of the sandbox.
- Returns
result of executed pipeline, specified by the sandbox (dict): execution summary including execution time and whether cache was used for each step; also contains a feature cost table if applicable
- Return type
(DataFrame or ModelResultSet)
- property reverse_class_map: dict
A summary of the category/integer class used by the KnowledgePack and the corresponding application categories
- save(name) Response
Renames the Knowledge Pack
- Returns
the server response to the rename request
- Return type
(Response)
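For example, with a hypothetical new name:
kp.save("My Renamed Model")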
- property sensor_summary: dict
A summary of sensor streams used by the KnowledgePack
- stop_recognize_signal()
Sends a kill signal to a pipeline
- property training_metrics: dict
The training metrics associated with the KnowledgePack
- property transform_summary: dict
A summary of transform parameters used by the KnowledgePack