1. Minimum of 6 years of IT experience
2. Minimum of 4 years of experience in Java/Spark technologies
3. Should have good experience in understanding requirements related to extraction,
transformation, and loading (ETL) of data using Java/Spark on the Hadoop platform
4. Should have strong SQL skills; Teradata is preferred, but experience in any other RDBMS
technology would suffice
5. Should have basic knowledge of data warehousing concepts
6. Should have worked on projects using an Agile methodology
7. Should be able to independently design, build, test, and deploy code with minimal
guidance
Knowledge of or experience with ETL technologies such as Informatica or Ab Initio would be preferable.
Any Bachelor's Degree
IT-Software / Software Services