Job Description:
As a Senior Software Engineer in the Data Fabric pod, you will play a crucial role in driving the development of innovative data solutions at IDfy.
You will be working on the Data Fabric platform, which provides Data as a Service (DaaS) and Insights as a Service (IaaS) for people at IDfy and for IDfy's customers.
You will own the design, development, and implementation of software components that power our data platform. Your technical expertise will be instrumental in ensuring that our data solutions meet the highest standards of quality, efficiency, and scalability. The Data Fabric platform is the backbone for our Insights, Analytics, and Invoicing needs at IDfy. It serves a whopping 150 TB of reads per month, and close to 700M rows are ingested via the platform.
If you think of yourself as a Data Wizard with a deep understanding of data engineering technologies, this role is for you.
Experience: 3-5 years
Qualification: B.Tech/B.E.
Here's What Your Day Will Look Like
Designing and implementing Data Pipelines and Frameworks to provide a better developer experience for our dev teams.
Helping other pods at IDfy define their data landscape and onboarding them onto our platform.
Keeping abreast of the latest trends and technologies in Data Engineering, GenAI, and Natural Language Query.
Setting up logging, monitoring, and alerting mechanisms for better visibility into data pipelines and platform health.
Automating repetitive data tasks to improve efficiency and free up engineering bandwidth.
Maintaining technical documentation to ensure knowledge sharing and onboarding efficiency.
Troubleshooting and resolving bottlenecks in data processing, ingestion, and transformation pipelines.
We build products that detect and prevent fraud. At IDfy, you will apply your skills to stay one step ahead of fraudsters. You will be mind-mapping fraudsters' modus operandi, predicting the evolution of fraud techniques, and designing solutions to prevent new and emerging fraud.
At IDfy, you will work on the entire end-to-end solution rather than being a small cog in a giant wheel.
Thanks to our problem-centric approach, one in which we find the right technology to solve a problem rather than the other way around, you will always be working on the latest technologies.
We work hard and party hard. There are weekly sessions on emerging technologies. Work weeks are usually capped off with board games, poker, karaoke, and other fun activities.
We Are the Perfect Match If You
Have experience creating and managing large-scale data ingestion pipelines using the ELT (Extract, Load, Transform) model.
Take ownership of defining data models, transformation logic, and data flow in your current role.
Are proficient in Logstash, Apache Beam/Dataflow, Apache Airflow, ClickHouse, Grafana, InfluxDB/VictoriaMetrics, and BigQuery.
Have a strong understanding of and hands-on experience with data warehouses, with at least 3 years of experience in any data warehousing stack.
Have a keen eye for data and can derive meaningful insights from it.
Understand product development methodologies; we follow Agile.
Have experience with Time Series Databases (we use InfluxDB and VictoriaMetrics) and alerting/anomaly detection frameworks (preferred but not mandatory).
Are familiar with visualization tools such as Metabase, Power BI, or Tableau.
Have experience developing software in the cloud (GCP/AWS is preferred, but hands-on experience is not mandatory).
Are passionate about exploring new technologies and enjoy sharing your knowledge through technical blogs.
Keyskills: Software Development, GCP, Agile, Data Processing, Power BI, Apache, Data Warehousing, Analytics, Monitoring, Technical Documentation