Projects
I was very close to uploading test data to BigQuery but hit errors when joining the users table, which lives in a different database from the cash_movement table. It turned out that some ID fields are declared varchar(16) because they store UUIDs converted to binary(16) for space efficiency. I adjusted the data pipeline scripts to handle this and successfully uploaded the test data to BigQuery. I validated the data and it looks good; I just need Thomas from Insights to review the new approach.
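For reference, the UUID-to-binary(16) round trip the pipeline now has to handle can be sketched in Python. This is a minimal sketch using the standard library; the actual column names and conversion code in our scripts may differ:

```python
import uuid

def uuid_to_bin(u: str) -> bytes:
    # Pack a canonical 36-character UUID string into 16 raw bytes,
    # matching how the source tables store IDs for space efficiency.
    return uuid.UUID(u).bytes

def bin_to_uuid(b: bytes) -> str:
    # Restore the canonical string form from the 16-byte value.
    return str(uuid.UUID(bytes=b))

original = "0f8fad5b-d9cb-469f-a165-70867728950e"
packed = uuid_to_bin(original)  # 16 bytes instead of 36 characters
restored = bin_to_uuid(packed)
```

The round trip is lossless, which is what lets the pipeline convert IDs to strings for joining and back without corruption.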
I added a search index on the notes field in the cash_movement BigQuery table to improve query performance. I also opened a draft PR to add both the cash movement and register open sequence CDC streams to the reporting pipeline, putting the project ahead of schedule.
A thread with Mat H. revealed that a third-party integrator actually needs cash movement data in the register_closure event webhooks rather than via a public API. We need to decide whether to add this to the current sprint.
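A minimal sketch of what the extended register_closure payload might look like; the field names and structure here are assumptions for discussion, not the integrator's actual schema:

```python
import json

def build_register_closure_event(register_id, cash_movements):
    # Hypothetical webhook payload: embeds cash movement records directly
    # in the register_closure event instead of requiring a separate API call.
    return {
        "event": "register_closure",
        "register_id": register_id,
        "cash_movements": [
            {"id": m["id"], "type": m["type"], "amount_cents": m["amount_cents"]}
            for m in cash_movements
        ],
    }

event = build_register_closure_event(
    "reg-42",
    [{"id": "cm-1", "type": "payout", "amount_cents": -1500}],
)
payload = json.dumps(event)  # serialized body as it would be POSTed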
Investigations
I will get to the offline mode cost analysis after finishing the foundational work for the Cash Movement Report.