Thingularity is an engineering services & technology solutions company with core expertise in the Internet of Things, Industry 4.0, Embedded Systems, Cloud, and AI & ML.

Contacts

138, 7th Cross, 29th Main Road, BTM 2nd Stage, Bangalore 560076

info@thingularitynow.com

+1 (256) 474-4326

Job Location: Bangalore 
Experience Level: 6+ Years
Education Qualifications: Bachelor's/Master's in Electronics, Computer Science, or MCA

Thingularity is looking for a Sr. Cloud Data Engineer with 6-8 years of development experience implementing IoT-based data applications on AWS.

Key Skillsets

1. Educational Background: Strong foundation in Computer Science or Software Engineering
2. 6+ years of hands-on experience developing and building data pipelines on cloud and hybrid infrastructure for analytical needs.
3. Experience working with cloud-based data warehouse solutions, along with expertise in SQL and advanced SQL.
4. Deep expertise in modern cloud data warehouses and data lakes, with implementation experience on any of the major cloud platforms – preferably AWS (e.g., Amazon S3).
5. Must be proficient in Python or Scala, and must know PySpark.
6. Must have expertise in database management: relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
7. Must have experience in Big Data technologies – Apache Spark, Delta Tables, and data lake architecture.
8. Desirable to have experience in Databricks.
9. Desirable to have experience in ETL automation tools like Apache Airflow.
10. Desirable to have experience in Apache Flink.
11. Desirable to have knowledge of BI tools (Tableau, Microsoft Power BI).
12. Desirable to have experience with ML frameworks and libraries such as TensorFlow, PyTorch, or XGBoost.
13. Exposure to data science projects will be a plus.
14. Experience with code management and collaboration tools such as Jira, GitHub, Confluence, and Bitbucket.

Job Description

1. As a Cloud Data Engineer, you will design, develop, and implement modern cloud-based data warehouses/data lakes and influence the overall data strategy for the organization.
2. Translate complex business requirements into scalable technical solutions that meet data warehousing/analytics design standards (architect role).
3. Develop and optimize data processing jobs and analytics applications that can handle the volume, velocity, and variety of big data (petabytes).
4. Perform ETL operations to transform raw data into a structured format.
5. Design and construct robust data pipelines. Develop custom data models and algorithms.
6. Optimize data storage and compute for cost and performance.
7. Implement data cleaning and validation processes to enhance data accuracy and consistency.
8. Mitigate algorithmic biases by ensuring that data pipelines are designed with fairness and transparency.
9. Collaborate with data scientists and analysts to support analytics initiatives.
10. Bridge the gap between data engineering and machine learning: prepare and manage datasets for training machine learning models, and integrate models into production environments.
 

Do you fit the Criteria? Submit your details and resume in the form below.