Job Description
Work on a variety of projects requiring data ingestion, transformation, and loading into a data warehouse using Python, PySpark, and Spark
Build and maintain robust data pipelines that ingest TBs of data
Create logical and physical data models for big data stored in cloud data warehouses (e.g., AWS Redshift, Snowflake)
Optimize Spark and SQL queries
Create new SQL tables and dashboards
Work with data analysts on implementing dashboards and analyzing large sets of alternative data to help the investment management team answer questions around their investment thesis
Conduct statistical tests (hypothesis testing), correlation analysis, time series analysis, anomaly detection, etc.
Write queries to extract insights from large datasets
Job Classification
Industry: KPO, Research, Analytics
Functional Area: Analytics & Business Intelligence
Role Category: Analytics & BI
Role: Analytics & BI
Employment Type: Full-time
Education
Under Graduation: B.Tech/B.E. in Any Specialization
Post Graduation: Any Postgraduate in Any Specialization (not required)
Doctorate: Any Doctorate in Any Specialization (not required)
Contact Details:
Company: hCapital Business
Location(s): Pune
Keyskills:
time series analysis
snowflake
python
sql queries
spark
pyspark
data warehousing
big data
aws