Exercise 2.2 - Time-Travel + Clone Lab
Hands-on Time Travel and Zero-Copy Clone practice using the meter-reads scenario. About 20 minutes. Leaves you with first-hand experience of the two features the JD names specifically.
Prereq
Exercise 2.1 complete: LEARN_WH + LEARN_DB.UTILITY.meter_reads exists with 1M synthetic rows.
Scenario 1 - Undo a bad DELETE
USE LEARN_DB.UTILITY; USE WAREHOUSE LEARN_WH;
-- Before state: how many rows?
SELECT COUNT(*) FROM meter_reads; -- 1,000,000
-- The mistake: delete everything for zip 97124. Hit the wrong filter.
DELETE FROM meter_reads WHERE premise_zip = '97124';
SELECT COUNT(*) FROM meter_reads; -- fewer rows
-- Recover using Time Travel: what did the table look like 2 minutes ago?
SELECT COUNT(*) FROM meter_reads AT(OFFSET => -120);
-- Restore the deleted rows (the pre-delete state MINUS the current state yields exactly the deleted rows)
INSERT INTO meter_reads
SELECT * FROM meter_reads AT(OFFSET => -120)
MINUS
SELECT * FROM meter_reads;
SELECT COUNT(*) FROM meter_reads; -- back to 1,000,000
Internalize: 20 seconds of SQL restored 1M deleted rows. In most databases this would mean a restore-from-backup exercise. This is the interview anecdote.
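A variant of the same recovery, a sketch assuming you can grab the bad DELETE's query ID (from LAST_QUERY_ID() right after running it, or from Query History): BEFORE(STATEMENT => ...) reads the table as it was immediately before that statement, so you don't have to guess an offset. The '<query_id_of_delete>' placeholder is hypothetical.

-- Confirm the pre-delete state, then replay the same INSERT ... MINUS pattern
SELECT COUNT(*) FROM meter_reads BEFORE(STATEMENT => '<query_id_of_delete>');
INSERT INTO meter_reads
SELECT * FROM meter_reads BEFORE(STATEMENT => '<query_id_of_delete>')
MINUS
SELECT * FROM meter_reads;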
Scenario 2 - Snapshot before a risky operation
-- About to run a VEE-rewrite that mutates quality_flag on many rows.
-- Take a clone as a safety.
CREATE TABLE meter_reads_pre_vee CLONE meter_reads;
-- Clone creation: check how fast it was (History tab in Snowsight). Nearly instant.
SELECT COUNT(*) FROM meter_reads_pre_vee; -- 1,000,000
-- Now do something disruptive
UPDATE meter_reads SET quality_flag = 'E' WHERE premise_zip LIKE '970%';
-- Compare
SELECT
(SELECT COUNT(*) FROM meter_reads WHERE quality_flag = 'E') AS current_E_count,
(SELECT COUNT(*) FROM meter_reads_pre_vee WHERE quality_flag = 'E') AS pre_vee_E_count;
-- If the VEE ran wrong, swap back (atomic in Snowflake):
-- ALTER TABLE meter_reads SWAP WITH meter_reads_pre_vee;
-- All good: drop the safety
DROP TABLE meter_reads_pre_vee;
Scenario 3 - Dev environment from prod
-- Pretend our LEARN_DB is prod. Create a developer sandbox.
CREATE DATABASE LEARN_DB_DEV CLONE LEARN_DB;
-- How fast was that? Check Query History.
-- Inspect the cloned data
SELECT COUNT(*) FROM LEARN_DB_DEV.UTILITY.meter_reads;
-- Mutate the dev copy freely
TRUNCATE TABLE LEARN_DB_DEV.UTILITY.meter_reads;
-- Source is untouched
SELECT COUNT(*) FROM LEARN_DB.UTILITY.meter_reads; -- still 1,000,000
SELECT COUNT(*) FROM LEARN_DB_DEV.UTILITY.meter_reads; -- 0
-- Drop when done
DROP DATABASE LEARN_DB_DEV;
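Worth knowing while you're here: Time Travel covers drops too. If you dropped the sandbox (or the source!) by mistake, you can bring it back within the retention window; a minimal example:

-- Recover a dropped database within its Time Travel retention period
UNDROP DATABASE LEARN_DB_DEV;
DROP DATABASE LEARN_DB_DEV; -- re-drop once you've confirmed it works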
Scenario 4 - Point-in-time analysis
-- Let some time pass then simulate data changing.
-- Note: Snowflake DELETE has no LIMIT clause; bound the delete with the filter instead.
DELETE FROM meter_reads WHERE premise_zip LIKE '971%';
-- "What did the data look like at 10am today? Give me a snapshot."
CREATE TABLE meter_reads_at_10am CLONE meter_reads
AT(TIMESTAMP => DATEADD(hour, -1, CURRENT_TIMESTAMP())::TIMESTAMP_LTZ); -- cast to TIMESTAMP_LTZ; adjust the offset as needed
SELECT COUNT(*) FROM meter_reads_at_10am;
DROP TABLE meter_reads_at_10am;
Observe: storage billing
Admin -> Cost Management -> Consumption. Look at the "Storage" chart. The 1M-row table uses a few MB. Your clones added ~0 storage until you deleted rows in the source (the "old" micro-partitions are kept alive for Time Travel).
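You can also verify the near-zero-clone-storage claim in SQL. A sketch using the ACCOUNT_USAGE view (which can lag by an hour or two): tables in the same CLONE_GROUP_ID share micro-partitions, and ACTIVE_BYTES shows only what each table owns uniquely.

SELECT table_name, active_bytes, time_travel_bytes, clone_group_id
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS
WHERE table_catalog = 'LEARN_DB'
ORDER BY table_name;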
Cleanup
-- Shrink time-travel retention to recover storage quickly (optional).
-- Warning: setting retention to 0 immediately and permanently discards this table's existing Time Travel history.
ALTER TABLE LEARN_DB.UTILITY.meter_reads SET DATA_RETENTION_TIME_IN_DAYS = 0;
ALTER TABLE LEARN_DB.UTILITY.meter_reads SET DATA_RETENTION_TIME_IN_DAYS = 1;
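To confirm the retention setting actually took effect at the table level (it can also be inherited from the schema, database, or account), a quick check:

SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE LEARN_DB.UTILITY.meter_reads;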
ALTER WAREHOUSE LEARN_WH SUSPEND;
Save to notepad
- Time Travel recovery pattern (OFFSET query -> INSERT MINUS)
- Clone-as-safety pattern (CLONE -> risky op -> SWAP or DROP)
- Sandbox-from-prod pattern (CREATE DATABASE dev CLONE prod)