1 + 1 > 2
Collaborate in real time.

Lodestar is data scientist-centric. Run experiments on the live dataset while the experts are annotating it. Designed with video as a first-class citizen and optimized for dataset design and scale, Lodestar lets our customers accomplish in weeks what used to take months.

Schedule a demo

Continuous collaboration

Data scientists experiment while the subject matter experts annotate on a single source of truth.

Data scientist workbench

Train, test, and perform statistical analyses with in-platform GPUs and Jupyter notebooks.
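As a sketch of what a workbench session might look like (the pylucy package name and the connect, dataset, and summary calls below are illustrative assumptions, not the documented pyLucy API):

    import pylucy  # hypothetical package name for the pyLucy client

    # Open a live dataset from the in-platform Jupyter notebook.
    client = pylucy.connect(api_key="YOUR_API_KEY")
    dataset = client.dataset("warehouse-cameras")

    # Quick statistical sanity check while the experts keep annotating.
    print(dataset.summary())  # e.g. frame count, label count, class balance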

Lodestar API

A Pythonic API enables easy manipulation of your datasets and connection to your custom workflows.
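Continuing the sketch above, dataset manipulation could read like ordinary Python (the frames and labels attributes are again assumptions):

    # `dataset` as opened in the sketch above.
    # Pull committed labels and hand them to your own pipeline.
    boxes = [
        label
        for frame in dataset.frames(labeled=True)
        for label in frame.labels
        if label.kind == "box"
    ]
    print(len(boxes), "bounding boxes ready for your custom workflow")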

Fast random pixel access

Built for scale: pixel-level access to any frame in a few hundred milliseconds.
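A sketch of that access pattern, under the same hypothetical API (the frame index and pixels array are illustrative):

    # Jump straight to an arbitrary frame; no sequential video decode.
    frame = dataset.frame(48_213)
    patch = frame.pixels[120:240, 300:420]  # pixel-level crop of that frame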

Pixel-native data structure

A unique data structure enables experiments at the pixel level without duplicating the raw pixels.
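For illustration, an experiment view in the hypothetical API above (the view call and its arguments are assumptions):

    # Build an experiment view; it references the raw pixels, never copies them.
    night = dataset.view(where=lambda f: f.metadata.get("hour", 0) >= 22)
    print(night.frame_count(), "frames in the view, zero duplicated pixels")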

Versioning

Track each label with object-based versioning.
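A sketch of reading a label's history (the history call and version fields are assumptions):

    # Each label is an object that carries its own version history.
    label = dataset.frame(48_213).labels[0]
    for version in label.history():
        print(version.author, version.timestamp, version.action)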

Single source of truth collaboration

Our unique database gives data scientists and other users the ability to view, tag, and experiment on data without needing to modify it. Once your videos and images are uploaded, add metadata at the pixel level.
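A sketch of pixel-level tagging in the same hypothetical API (the tag call and its arguments are assumptions):

    # Tag a pixel region without touching the underlying video.
    frame = dataset.frame(48_213)
    frame.tag(region=(120, 300, 240, 420), key="occlusion", value="partial")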

Expand the team

Bring in experts with one click


Experts are difficult to find and are often outside your company or working remotely. We have developed an easy-to-share web interface that lets you bring all of that remote talent in to collaborate on a single source of truth.

Immediate results

Track progress with each label


Continuous training: every label you commit contributes to the next cycle of automatic training. By monitoring the internal model's cross-validation metrics and other parameters exposed through the pyLucy API, you get an immediate picture of your labeling progress.
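A sketch of that monitoring loop (the cross_validation call and metric names are assumptions, not the documented pyLucy API):

    # Poll the internal model's cross-validation metrics after a labeling cycle.
    metrics = dataset.model.cross_validation()
    print("mAP:", metrics.mean_average_precision)
    for cls, ap in sorted(metrics.per_class.items()):
        print(f"  {cls}: {ap:.3f}")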

Dataset design

Your model is only as good as the data. Get early insight into the quality of your data and tune your labeling requirements while you label.

Learn more about the integrated Jupyter notebooks

Model validated dataset

Get analytic insights into your dataset while annotating. Your dataset is constantly validated by training the system's internal model. Predictive labels and a confidence graph provide visual feedback while the data is being annotated. Cross-validation mean Average Precision (mAP) provides a high-level read on dataset quality.
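A sketch of reviewing those predictions (the predicted_labels call and min_confidence parameter are assumptions):

    # Review the internal model's predictions alongside their confidence.
    for pred in dataset.frame(48_213).predicted_labels(min_confidence=0.5):
        print(pred.class_name, f"{pred.confidence:.2f}")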


Automated QA Process

Build hypotheses on the generated datasets and test them. Filter a specific class of labels and test how it impacts your model accuracy, then follow up with new instructions to the labelers, all in real time.
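A sketch of one such hypothesis test (the class name, exclude_classes argument, and retrain call are assumptions):

    # Hypothesis: one label class is dragging accuracy down.
    baseline = dataset.model.cross_validation().mean_average_precision
    trimmed = dataset.view(exclude_classes={"forklift"})
    result = trimmed.train_internal_model()  # hypothetical retrain call
    print("delta mAP:", result.mean_average_precision - baseline)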


Dataset debugging

Slice and dice your dataset without moving a pixel, thanks to millisecond access to any frame. Run experiments on your labels with programmatic flexibility: jump between frames, manipulate pixels, filter out unwanted labels, and check for underrepresented sub-classes.
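A sketch of hunting for underrepresented sub-classes (only collections.Counter is real here; the dataset calls remain assumptions):

    from collections import Counter

    # Count labels per sub-class to surface underrepresented ones.
    counts = Counter(
        label.class_name
        for frame in dataset.frames(labeled=True)
        for label in frame.labels
    )
    for cls, n in counts.most_common():
        if n < 50:  # hypothetical threshold for "underrepresented"
            print(f"{cls}: only {n} labels")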
