Datasets store tabular data in SQL tables, making them a good fit for sensor logs, transactions, experiment results, and more. Each dataset automatically exposes a REST API, generated from its schema, so you can query or update data programmatically right away.
Choose the method that fits your workflow (web UI, API, or client library).
When you upload a CSV file, Ouro automatically converts it into a dataset, inferring the column names and data types to build the schema.
Using the Python SDK, you can read your data into a pandas DataFrame and upload it to Ouro.
import pandas as pd

# Assumes `ouro` is an initialized Ouro SDK client
# Read a local CSV into a DataFrame, then create a dataset from it
df = pd.read_csv('path/to/my_file.csv')
dataset = ouro.datasets.create(data=df, name='my_dataset', visibility='public')
For more control over the schema, you can provide a CREATE TABLE statement.
CREATE TABLE datasets.my_dataset (
id INTEGER PRIMARY KEY,
name VARCHAR(255),
age INTEGER,
email VARCHAR(255)
);
Schema-first creation is not fully supported yet; after creating the table, load your data in a follow-up step (loading tools and docs are coming soon).
Once you've added your data to Ouro, you will automatically get a visualization based on the structure of the dataset.
Custom queries and visualizations are on the roadmap.
Working with your data is just as easy as adding it.
data = ouro.datasets.load("my_dataset")
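Since the dataset was created from a DataFrame, the loaded data comes back in the same tabular shape, so the usual pandas operations apply. A minimal sketch (the hand-built DataFrame below stands in for the result of `ouro.datasets.load`, whose exact return type may vary):

```python
import pandas as pd

# Stand-in for: data = ouro.datasets.load("my_dataset")
data = pd.DataFrame({
    "id": [1, 2, 3],
    "name": ["Ada", "Grace", "Alan"],
    "age": [36, 45, 41],
})

# Typical follow-up analysis once the dataset is loaded
over_40 = data[data["age"] > 40]
print(over_40["name"].tolist())  # → ['Grace', 'Alan']
```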
For large datasets that can't be loaded all at once, we expose a SQL interface for fine-grained queries.
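For example, against a table like the one in the CREATE TABLE example above, a query can pull just the slice of rows you need instead of the whole dataset (the column names here follow that illustrative schema):

```sql
SELECT name, email
FROM datasets.my_dataset
WHERE age > 30
ORDER BY name
LIMIT 100;
```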
Datasets turn raw tables into living assets—queryable, shareable, and ready for analysis the moment they're created.