3 Steps to Creating Any Data Project
How to Gather Requirements
This is the first step in creating and building your data pipeline in any company. Relax and believe in the process that I present to you right here, right now. If you have a huge ego, try to control it, or leave it outside the office, because you are dealing with end users here, and end users are your customers.
Who are your end users?
You need to understand who your end users are going to be. You prepare the data, and your end users cook it: you are the farmer, and they are the chefs and bakers. To make this job easy for you, let me introduce you to some of the end users you will find in any company.
- Data analysts use SQL and files.
- Data scientists use SQL and files.
- Software engineers use SQL and APIs.
- Business analysts use reports, dashboards, and Excel files.
- Project managers use dashboards.
- External users use S3 objects, SFTP/FTPS, and APIs.
How to Help End Users Define Requirements?
You don’t get all your requirements on the first day. You need to have patience. You have to empathise with your end users’ pain. You should believe in the process. I can’t do it for you, but I can help you get there with these magic questions:
- How will this data improve the business? For example, will it help with the churn rate problem?
- What does the data represent?
- What is the business process used to collect this data?
- What is the origin of this data? (S3 files, SFTP, APIs, an external database, an internal database, a manual upload of files)
- How regularly does the data arrive?
- How fresh do you need the data to be: seconds, minutes, hours, days, weeks, or months?
- Do you need to store historical data?
- What is the seasonality of the data?
- Does the data have a skew in size (for example, much larger files on some days)?
- How will the end users access the data: SQL, dashboards, or APIs?
- What are the data quality metrics of the company?
- How do you know the data passed the business logic-based checks?
- What are your numeric field checks?
- Do you follow a naming convention for files, schemas, tables, columns, or API fields?
- What is the standard size of the files in the company?
What is the End User Validation Process?
- Provide samples of the data to the end users.
- Take their feedback seriously regarding the validation of the data because they understand the data better than you.
- Observe their access patterns: which schemas, tables, and columns they query most often inside the database, or which filters they apply if it is a dashboard (see the sketch after this list).
- Write any new requirement as a ticket in Jira, and don’t start working on a new transformation layer until you have the green light from the end users, which means the sign-off on the ticket.
- Forget about your ego when you work with end users.
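If your warehouse runs on PostgreSQL, one simple way to observe access patterns is to read the query statistics the database already keeps. Here is a minimal sketch, assuming the pg_stat_statements extension is enabled; the connection string is a hypothetical placeholder:

```python
# A minimal sketch: list the most-executed queries so you can see which
# schemas, tables, and columns end users actually touch.
# Assumes PostgreSQL with the pg_stat_statements extension enabled.
import psycopg2

conn = psycopg2.connect("dbname=warehouse user=analyst")  # hypothetical DSN

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT query, calls
        FROM pg_stat_statements
        ORDER BY calls DESC
        LIMIT 20;
        """
    )
    for query, calls in cur.fetchall():
        print(f"{calls:>8}  {query[:120]}")

conn.close()
```

The queries at the top of that list tell you which tables and columns to document and optimise first.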
What is Your Delivery Process?
- Deliver the job in small chunks, bit by bit.
- Keep the end users in the loop by asking them to review your tickets, because that way you can easily spot any new requirements.
- Record your job using Jira tickets. These Jira tickets need to have clear acceptance criteria.
- Document every big change in Confluence pages.
Let me give you a clear example. Imagine you need to ingest a huge number of data sources into your data lake, then transform them and load them into your data warehouse. To do that, follow this process (a sketch follows the list):
- Model the data from only one source.
- Pull the data from the source you chose in the first step. For example, the Google Ads API.
- Put the data in the data lake.
- Apply a simple transformation to the data, and load it into your data warehouse.
- Build the dashboard based on the data that you have in the data warehouse.
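Here is a minimal sketch of that one-source flow, with hypothetical names everywhere: extract_google_ads() stands in for a real Google Ads API client, and the bucket, table, and connection string are placeholders:

```python
# A minimal sketch of the flow above: pull from one source, land the raw
# data in the lake, apply a simple transformation, load the warehouse.
import json

import boto3
import pandas as pd
from sqlalchemy import create_engine


def extract_google_ads() -> list[dict]:
    # Placeholder for the real Google Ads API call (step 2).
    return [{"campaign": "brand", "clicks": 120, "cost": 45.5}]


def load_to_data_lake(records: list[dict]) -> None:
    # Step 3: land the raw data in the data lake (here, an S3 bucket).
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="my-data-lake",                 # hypothetical bucket
        Key="raw/google_ads/2024-01-01.json",  # hypothetical key
        Body=json.dumps(records).encode("utf-8"),
    )


def transform_and_load(records: list[dict]) -> None:
    # Step 4: a simple transformation, then load into the warehouse.
    df = pd.DataFrame(records)
    df["cost_per_click"] = df["cost"] / df["clicks"]
    engine = create_engine("postgresql://user:pass@warehouse/dw")  # hypothetical
    df.to_sql("google_ads_daily", engine, if_exists="append", index=False)


records = extract_google_ads()
load_to_data_lake(records)
transform_and_load(records)
```

Once this thin end-to-end slice works for one source, repeating it for the other sources is mostly mechanical.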
How to Create a Process for Managing Change Requests?
- Don’t accept ad hoc requests (unexpected requests).
- Educate the end users about the process for requesting a change.
- Allow end users to request changes in an easy way.
- Communicate delivery times to the end users.
- If some requests are more important than others, work with the stakeholders to decide which one has top priority.
How to Add Testing to Your Project?
What is System Testing?
This type of testing can only be done in the development environment (a minimal sketch follows the list below).
- Take any data sample and pass it through the data pipeline.
- Generate the output from this data sample.
- Compare the output from the second step with the expected output.
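Here is a minimal sketch of such a system test with pytest, assuming a run_pipeline() entry point that takes an input file and returns a DataFrame; the function name and fixture paths are hypothetical:

```python
# A minimal system test sketch: sample in, pipeline output out, compare
# against a hand-prepared expected file. Run with: pytest
import pandas as pd

from my_pipeline import run_pipeline  # hypothetical pipeline entry point


def test_pipeline_on_sample():
    # Step 1: take a small, fixed data sample.
    sample_path = "tests/fixtures/sample_input.csv"
    # Step 2: generate the output by running it through the pipeline.
    actual = run_pipeline(sample_path)
    # Step 3: compare the output with the expected output.
    expected = pd.read_csv("tests/fixtures/expected_output.csv")
    pd.testing.assert_frame_equal(
        actual.reset_index(drop=True), expected, check_dtype=False
    )
```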
What is Data Quality Testing?
You can only implement this data quality testing as a step in the production environment. You can follow these steps to make it happen (a sketch follows the list):
- dbt and Great Expectations will be your weapons in this test.
- You must load the data into a staging table, and after that, start applying constraint checks, business logic-based checks, and outlier checks.
- If there is a problem, meaning something didn’t pass the checks, it will trigger an alarm.
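Here is a minimal sketch of the three kinds of checks in plain pandas; in production you would express the same rules as dbt tests or Great Expectations suites. The staging table and its columns are hypothetical:

```python
# A minimal sketch of constraint, business-logic, and outlier checks on a
# staging table; column names are hypothetical.
import pandas as pd


def check_staging(df: pd.DataFrame) -> list[str]:
    failures = []
    # Constraint check: the primary key must be unique and non-null.
    if df["order_id"].isna().any() or df["order_id"].duplicated().any():
        failures.append("constraint: order_id must be unique and non-null")
    # Business logic-based check: a refund can never exceed the amount.
    if (df["refund"] > df["amount"]).any():
        failures.append("business logic: refund greater than amount")
    # Outlier check: flag amounts more than 3 standard deviations out.
    z = (df["amount"] - df["amount"].mean()) / df["amount"].std()
    if (z.abs() > 3).any():
        failures.append("outlier: amount beyond 3 standard deviations")
    return failures


staging = pd.DataFrame(
    {"order_id": [1, 2, 2], "amount": [10.0, 20.0, 15.0], "refund": [0.0, 25.0, 5.0]}
)
failures = check_staging(staging)
if failures:
    # This is the condition that should trigger your alarm.
    raise RuntimeError(f"Data quality checks failed: {failures}")
```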
What are Monitoring and Alerting Systems?
- You use monitoring to catch changes in your data pipeline.
- Create logs from your applications, or if you are working in the AWS cloud, create them using AWS CloudWatch.
- You can send the logs to DataDog, or you can send them to a table in the database.
- Program an alarm using DataDog, or through your database, so you are notified if something bad happens with your pipeline (see the sketch after this list).
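Here is a minimal sketch of the logging and alarm side, assuming you run on AWS; the namespace, metric, and pipeline names are hypothetical. A CloudWatch alarm (or a DataDog monitor watching the same metric) would then notify you when the value rises above zero:

```python
# A minimal sketch: write a structured log line and publish a custom
# CloudWatch metric that an alarm can watch. Names are hypothetical.
import logging

import boto3

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders_pipeline")


def report_run(failed_checks: int) -> None:
    # Structured log line; CloudWatch Logs picks this up on AWS, or you
    # can ship it to DataDog or a database table instead.
    logger.info("pipeline=orders failed_checks=%d", failed_checks)
    # Custom metric for the alarm to watch.
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="DataPipelines",  # hypothetical namespace
        MetricData=[
            {
                "MetricName": "FailedQualityChecks",
                "Dimensions": [{"Name": "Pipeline", "Value": "orders"}],
                "Value": float(failed_checks),
                "Unit": "Count",
            }
        ],
    )


report_run(failed_checks=0)
```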
What is Your Offboarding Process?
- Create video tutorials of all the big changes in the project.
- Create Confluence pages for any new data pipeline project that you implement.
- In your last weeks at the company, support your colleagues in case they have problems following your tutorials.