Exercise 1.1 — Map a Prior ETL Project to a Modern Stack

A short reflection-and-translation exercise. If you've worked on a data-ingest pipeline earlier in your career, you've done data engineering (DE); you just called it something else. This exercise pulls that experience forward so you can tell the interview story with confidence.

Time: 20–30 minutes. Output: an interview-ready paragraph along the lines of "Yes, I've built data pipelines. A prior project…"

The prompt

Pick a past project that moved data from a source (files, database, API) into a database or warehouse, or imagine a canonical one if no past project fits. Then re-tell it using the modern DE stack, filling in each box below.

Step 1 — Extract

Then (classic approach)

Seed: Scheduled script runs on a cadence. SFTP/HTTP/API pull to pick up new data. Validates file arrived, checksums, kicks off next step.
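The classic extract step in the seed can be sketched in a few lines. This is an illustrative sketch only, not any specific team's code: the function names are made up, and it assumes the source publishes a SHA-256 checksum out of band.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large extracts don't blow memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_arrival(path: Path, expected_sha256: str) -> bool:
    """The classic post-pull check: the file exists and its checksum
    matches what the source said it sent. Only if this returns True
    does the scheduler kick off the next step."""
    return path.exists() and sha256_of(path) == expected_sha256
```

In the modern column, the same idea usually shows up as event-driven arrival (e.g., an S3 object-created notification) rather than a polling script, but the validation logic is the part worth carrying forward.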

Now (Snowflake + AWS, modern)

Step 2 — Load (raw)

Then

Seed: A bulk-load command into a staging table. Row counts verified against source's "expected rows" header.
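The row-count reconciliation in the seed is simple enough to sketch. The `expected_rows=N` header format below is a hypothetical, for illustration; the point is failing loudly on a mismatch instead of letting a short-loaded staging table flow downstream.

```python
def expected_rows_from_header(header_line: str) -> int:
    """Parse a hypothetical 'expected_rows=N' header the source sends
    alongside the file (the format is an assumption for illustration)."""
    key, _, value = header_line.strip().partition("=")
    if key != "expected_rows":
        raise ValueError(f"unrecognized header: {header_line!r}")
    return int(value)

def reconcile_row_counts(expected: int, loaded: int) -> None:
    """Fail the load step loudly when counts diverge."""
    if expected != loaded:
        raise ValueError(
            f"staging load mismatch: expected {expected} rows, loaded {loaded}"
        )
```

The modern analogue is the same check against the load tool's own accounting; Snowflake's COPY INTO, for example, reports how many rows were parsed and loaded per file.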

Now

Step 3 — Transform

Then

Seed: SQL scripts in a specific run order. Business rules in SQL, some in Python. Manual QA on the output before publishing.
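"SQL scripts in a specific run order" is a pattern worth being able to describe concretely. A minimal sketch, using SQLite so it stays self-contained; the table names and business rules here are invented examples, not from any real project.

```python
import sqlite3

# Ordered transform scripts -- the classic pattern: business rules in SQL,
# executed in a fixed sequence so later steps can rely on earlier ones.
TRANSFORM_STEPS = [
    ("create_clean", """
        CREATE TABLE clean_orders AS
        SELECT id, UPPER(TRIM(status)) AS status, amount
        FROM raw_orders
        WHERE amount IS NOT NULL
    """),
    ("flag_large", "ALTER TABLE clean_orders ADD COLUMN is_large INTEGER"),
    ("set_flag", "UPDATE clean_orders SET is_large = (amount >= 100)"),
]

def run_transforms(conn: sqlite3.Connection) -> list:
    """Execute each step in order; return the names that ran, for the log."""
    ran = []
    for name, sql in TRANSFORM_STEPS:
        conn.execute(sql)
        ran.append(name)
    conn.commit()
    return ran
```

In interview terms, the modern translation is that the ordered-script list becomes a dependency graph of SQL models (the dbt-style approach), but the SQL itself carries over largely unchanged.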

Now

Step 4 — Orchestration

Then

Seed: Cron. Success/failure emails. Bash glue.
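What cron plus success/failure emails actually did can be captured in one function. A hedged sketch with invented names; `notify` stands in for whatever sent the emails.

```python
def run_pipeline(steps, notify):
    """Minimal cron-era orchestration: run steps in order, stop at the
    first failure, and 'email' (here: call notify) either way.

    steps: list of (name, zero-arg callable); notify: callable taking a str.
    """
    for name, fn in steps:
        try:
            fn()
        except Exception as exc:
            notify(f"FAILURE in {name}: {exc}")
            return False
    notify("SUCCESS: pipeline completed")
    return True
```

The modern framing: an orchestrator (Airflow, Dagster, Step Functions) makes the step list an explicit dependency graph with retries, backfills, and alerting built in, instead of bash glue around this loop.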

Now

Step 5 — Ops / observability

Then

Seed: Runbook, pager, log files in /var/log.
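The observability shift is from grepping free-text files in /var/log to emitting structured, queryable events. A minimal sketch; the field names are illustrative, not a standard.

```python
import json
import time

def log_event(step: str, status: str, **fields) -> str:
    """Emit one structured JSON log line per pipeline event, so a log
    platform can filter and aggregate by field instead of regex."""
    record = {"ts": time.time(), "step": step, "status": status, **fields}
    line = json.dumps(record, sort_keys=True)
    print(line)
    return line
```

Example: `log_event("load", "ok", rows=1042)` produces a line any log aggregator can index by `step`, `status`, and `rows`.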

Now

Step 6 — Version control & deploy

Then

Seed: Scripts in SVN or Git, manual deploy.

Now

Interview talking-point

Now, in 3–4 sentences, tell the story of that project using the modern language, as if the interviewer had just asked, "Do you have experience building ELT pipelines?"