Ingest real-time sensor data on Arm64
In this section, you simulate real-time sensor data using Python and continuously ingest it into TimescaleDB running on an Arm64 VM. This creates a live time-series data stream that can later be visualized using Grafana.
Python Sensor Generator
|
v
TimescaleDB Hypertable
This architecture mirrors real-world IoT and telemetry pipelines.
Install Python 3, pip, and the psycopg2 PostgreSQL adapter:

cd $HOME
sudo zypper install -y \
  python3 \
  python3-pip \
  python3-psycopg2
Verify psycopg2 is installed correctly:
python3 - <<EOF
import psycopg2
print("psycopg2 OK")
EOF
The output is similar to:
psycopg2 OK
Connect to the sensors database and create the sensor_data hypertable:
sudo -u postgres psql sensors
CREATE TABLE sensor_data (
time TIMESTAMPTZ NOT NULL,
sensor_id TEXT NOT NULL,
temperature DOUBLE PRECISION
);
SELECT create_hypertable('sensor_data', 'time');
Press Ctrl+D to exit back into the SSH shell.
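Each row the ingestion script inserts must match these three columns. As a quick sanity check, here is a minimal Python sketch that builds one reading tuple in the same shape (the `make_reading` helper is hypothetical and not part of the ingestion script):

```python
import random
from datetime import datetime, timezone

def make_reading(sensor_ids):
    """Build one (time, sensor_id, temperature) tuple matching the
    sensor_data columns: TIMESTAMPTZ, TEXT, DOUBLE PRECISION."""
    return (
        datetime.now(timezone.utc),        # time
        random.choice(sensor_ids),         # sensor_id
        round(random.uniform(20, 35), 2),  # temperature
    )

reading = make_reading(["sensor-1", "sensor-2", "sensor-3"])
print(reading)
```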
The following Python script simulates multiple sensors sending readings every two seconds and inserts them into TimescaleDB.
Create a new Python file called sensor_ingest.py and add the following code to the file:
import time
import random
import psycopg2
from datetime import datetime, timezone

# Connect to the sensors database as the postgres user
conn = psycopg2.connect(
    dbname="sensors",
    user="postgres",
    host="localhost"
)
cur = conn.cursor()

sensors = ["sensor-1", "sensor-2", "sensor-3"]

# Insert one simulated reading every two seconds
while True:
    cur.execute(
        "INSERT INTO sensor_data (time, sensor_id, temperature) VALUES (%s, %s, %s)",
        (
            datetime.now(timezone.utc),  # timezone-aware UTC timestamp
            random.choice(sensors),
            round(random.uniform(20, 35), 2),
        )
    )
    conn.commit()
    time.sleep(2)
Start the ingestion process as a background job so it continues running even after you close the terminal:
nohup python3 sensor_ingest.py > ingest.log 2>&1 &
Verify that the ingestion process is running:
ps -ef | grep sensor_ingest.py
The output is similar to:
gcpuser 5398 2841 0 08:55 pts/0 00:00:00 python3 sensor_ingest.py
gcpuser 5401 2841 0 08:55 pts/0 00:00:00 grep --color=auto sensor_ingest.py
Run the following command a few times to confirm that new rows are arriving:

sudo -u postgres psql sensors -c "SELECT COUNT(*) FROM sensor_data;"
The output is similar to:
gcpuser@tsdb-suse-arm64:~> sudo -u postgres psql sensors -c "SELECT COUNT(*) FROM sensor_data;"
count
-------
14
(1 row)
gcpuser@tsdb-suse-arm64:~> sudo -u postgres psql sensors -c "SELECT COUNT(*) FROM sensor_data;"
count
-------
15
(1 row)
gcpuser@tsdb-suse-arm64:~> sudo -u postgres psql sensors -c "SELECT COUNT(*) FROM sensor_data;"
count
-------
16
(1 row)
The following steps add an index, a retention policy, and a continuous aggregate to prepare TimescaleDB for production workloads.
Connect to the sensors database and create an index optimized for time-range scans by sensor:
sudo -u postgres psql sensors
Issue the following SQL command:
CREATE INDEX ON sensor_data (sensor_id, time DESC);
This index improves Grafana query performance for time-range scans.
Automatically remove data older than seven days:
SELECT add_retention_policy(
'sensor_data',
INTERVAL '7 days'
);
This prevents disk exhaustion and runs automatically in the background.
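TimescaleDB drops entire chunks whose data falls wholly outside the retention window, rather than deleting individual rows. Conceptually, the cutoff test looks like this minimal Python sketch (the `is_expired` helper is hypothetical):

```python
from datetime import datetime, timedelta, timezone

def is_expired(row_time, now, retention=timedelta(days=7)):
    # Data older than the retention interval is eligible for removal
    return row_time < now - retention

now = datetime(2026, 2, 17, 12, 0, tzinfo=timezone.utc)
print(is_expired(datetime(2026, 2, 9, 12, 0, tzinfo=timezone.utc), now))   # 8 days old -> True
print(is_expired(datetime(2026, 2, 16, 12, 0, tzinfo=timezone.utc), now))  # 1 day old -> False
```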
Precompute hourly averages per sensor for faster reporting:
CREATE MATERIALIZED VIEW sensor_hourly_avg
WITH (timescaledb.continuous) AS
SELECT
time_bucket('1 hour', time) AS bucket,
sensor_id,
AVG(temperature) AS avg_temp
FROM sensor_data
GROUP BY bucket, sensor_id;
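The `time_bucket('1 hour', time)` call truncates each timestamp to the start of its hour before grouping. A minimal Python sketch of the same bucketing and averaging logic, using made-up sample readings:

```python
from collections import defaultdict
from datetime import datetime, timezone

def time_bucket_1h(ts):
    # Truncate a timestamp to the start of its hour,
    # mirroring time_bucket('1 hour', time) for hour-aligned buckets
    return ts.replace(minute=0, second=0, microsecond=0)

readings = [
    (datetime(2026, 2, 17, 8, 5, tzinfo=timezone.utc), "sensor-1", 26.1),
    (datetime(2026, 2, 17, 8, 40, tzinfo=timezone.utc), "sensor-1", 27.3),
    (datetime(2026, 2, 17, 9, 2, tzinfo=timezone.utc), "sensor-1", 25.0),
]

# GROUP BY bucket, sensor_id with AVG(temperature), as the view does
groups = defaultdict(list)
for ts, sensor_id, temp in readings:
    groups[(time_bucket_1h(ts), sensor_id)].append(temp)
hourly_avg = {key: sum(vals) / len(vals) for key, vals in groups.items()}
print(hourly_avg)
```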
Automate hourly aggregate refresh every five minutes for near real-time analytics:
SELECT add_continuous_aggregate_policy(
'sensor_hourly_avg',
INTERVAL '1 day',
INTERVAL '1 hour',
INTERVAL '5 minutes'
);
The table below explains the three interval parameters:

| Parameter | Value | Meaning |
|---|---|---|
| start_offset | 1 day | Refresh buckets covering the last day |
| end_offset | 1 hour | Skip the most recent hour, which is still receiving data |
| schedule_interval | 5 minutes | Run the refresh job every five minutes |
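Each scheduled run refreshes only the window between the two offsets. A minimal Python sketch of how that window is derived (the `refresh_window` helper is hypothetical):

```python
from datetime import datetime, timedelta, timezone

def refresh_window(now,
                   start_offset=timedelta(days=1),
                   end_offset=timedelta(hours=1)):
    # Each run recomputes buckets in [now - start_offset, now - end_offset)
    return now - start_offset, now - end_offset

now = datetime(2026, 2, 17, 12, 0, tzinfo=timezone.utc)
start, end = refresh_window(now)
print(start, end)
```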
Verify that ingestion and aggregation are running correctly and that data is available for queries:

SELECT * FROM sensor_hourly_avg LIMIT 5;
SELECT COUNT(*) FROM sensor_data;
The output is similar to:
postgres=# SELECT * FROM sensor_hourly_avg LIMIT 5;
bucket | sensor_id | avg_temp
------------------------+-----------+-------------------
2026-02-17 08:00:00+00 | sensor-1 | 26.6380487804878
2026-02-17 08:00:00+00 | sensor-2 | 27.21
2026-02-17 08:00:00+00 | sensor-3 | 28.13413793103448
(3 rows)
postgres=# SELECT COUNT(*) FROM sensor_data;
count
-------
2466
(1 row)
Press Ctrl+D to exit.
Set a password for the postgres user so Grafana can connect in the next section:

sudo -u postgres psql

Run the following command and enter the new password when prompted:

\password postgres

Save the password; you'll need it when configuring the Grafana data source. Press Ctrl+D to exit.
You've successfully:

- Created a sensor_data hypertable in TimescaleDB
- Simulated and ingested real-time sensor readings with a Python script
- Added an index, a retention policy, and a continuous aggregate
Next, you’ll install Grafana, configure TimescaleDB as a data source, and build a live sensor temperature dashboard to visualize the real-time data you’re ingesting.