Data Engineer Jobs at MNC Companies || Jobs in Bangalore

Data Engineer Jobs at MNC Companies:


1. HP, Bengaluru, Karnataka, India:

Project Role: Data Engineering

Job Description
  • Meeting with managers to determine the company’s Big Data needs.
  • Developing Hadoop systems.
  • Loading disparate data sets and conducting pre-processing services using Spark, Hive, or Pig (see the PySpark sketch after this list).
  • Finalizing the scope of the system and delivering Big Data solutions.
  • Managing the communications between the internal system and the vendor.
  • Collaborating with the software research and development teams.
  • Building cloud platforms for the development of company applications.
  • Maintaining production systems.
  • Training staff on data management.
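
The pre-processing bullet above is the most hands-on part of this role. As a rough illustration only, here is a minimal PySpark sketch of loading two disparate data sets, cleaning them, and joining them for downstream use; the file paths, column names, and output location are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark pre-processing sketch (hypothetical paths and column names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("preprocess-example").getOrCreate()

# Load two disparate data sets: a CSV extract and a JSON feed.
customers = spark.read.option("header", True).csv("/data/raw/customers.csv")
orders = spark.read.json("/data/raw/orders.json")

# Basic pre-processing: drop duplicates, remove rows missing the join key,
# and normalise a text column before joining.
customers_clean = (
    customers.dropDuplicates(["customer_id"])
             .na.drop(subset=["customer_id"])
             .withColumn("email", F.lower(F.trim(F.col("email"))))
)

# Join the cleaned sets and write the result out for downstream jobs (e.g. Hive or BI tools).
joined = orders.join(customers_clean, on="customer_id", how="left")
joined.write.mode("overwrite").parquet("/data/curated/orders_enriched")
```
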
Big Data Engineer Requirements
  • Bachelor’s degree in computer engineering or computer science.
  • Previous experience as a big data engineer.
  • In-depth knowledge of Hadoop, Spark, and similar frameworks.
  • Knowledge of scripting languages is preferred.
  • Knowledge of NoSQL and RDBMS databases including Redis and MongoDB.
  • Familiarity with Mesos, AWS, and Docker tools.
  • Excellent project management skills.
  • Good communication skills.
  • Ability to solve complex data and software issues.
Click here to Apply: Apply Now

2. IBM, Bengaluru, Karnataka, India:

Project Role : Data Engineering

Job Description:

Req ID: 609996BR


Introduction


A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.


You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio; including Software and Red Hat.


Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.


Your Role and Responsibilities


  • In this role, you'll work in our IBM Client Innovation Center (CIC), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. These centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows.


Your Primary Responsibilities Include


  • Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
  • Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
  • Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
  • Design data solutions using Hadoop-based technologies, along with Azure HDInsight and Cloudera-based data lakes, using Scala programming.
  • Ingest data from files, streams, and databases, and process the data with Hadoop, Scala, SQL databases, Spark, ML, and IoT technologies.
  • Develop programs in Scala and Python for data cleaning and processing.
  • Design and develop distributed, high-volume, high-velocity, multi-threaded event-processing systems.
  • Develop efficient code for multiple use cases on the platform, leveraging Python and Big Data technologies.
  • Ensure operational excellence, guaranteeing high availability and platform stability.
  • Implement scalable solutions to meet ever-increasing data volumes, using big data and cloud technologies such as PySpark, Kafka, and cloud computing platforms (see the streaming sketch after this list).
  • If you thrive in a dynamic, collaborative workplace, IBM provides an environment where you will be challenged and inspired every single day. And if you relish the freedom to bring creative, thoughtful solutions to the table, there's no limit to what you can accomplish here.
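
To make the PySpark/Kafka bullet above more concrete, here is a minimal Spark Structured Streaming sketch that reads JSON events from a Kafka topic and lands them as Parquet. This is only an illustration: the broker address, topic name, schema, and output paths are hypothetical, and it assumes the Spark-Kafka connector package is available on the cluster.

```python
# Minimal PySpark Structured Streaming sketch (hypothetical topic, schema, and paths).
# Assumes the spark-sql-kafka connector package is available on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-example").getOrCreate()

# Expected shape of each JSON event on the topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("device", StringType()),
    StructField("reading", DoubleType()),
])

# Read the raw Kafka stream and parse the JSON payload.
raw = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "device-events")
         .load()
)
events = raw.select(
    F.from_json(F.col("value").cast("string"), event_schema).alias("e")
).select("e.*")

# Land the parsed events as Parquet; the checkpoint makes the job restartable.
query = (
    events.writeStream.format("parquet")
          .option("path", "/data/streams/device_events")
          .option("checkpointLocation", "/checkpoints/device_events")
          .start()
)
query.awaitTermination()
```
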


Required Technical and Professional Expertise


  • Minimum of 7 years of experience in Big Data technologies, with in-depth experience in modern data platform components such as Hadoop, Hive, Pig, Spark, Python, and Scala.
  • Proficiency in at least one of the programming languages Python, Scala, or Java.
  • Experience with distributed version control systems such as Git.
  • Demonstrated experience in modern API platform design, including how modern UIs are built by consuming services/APIs.
  • Solid experience in all phases of the Software Development Lifecycle: plan, design, develop, test, release, maintain and support, and decommission.


Preferred Technical and Professional Expertise


  • Familiarity with development tools: experience with IntelliJ, Eclipse, or VS Code IDEs and the Maven build tool.
  • Experience with Azure cloud, including Data Factory, Databricks, and Data Lake Storage, is highly preferred.
Click here to Apply: Apply Now

3. Uplers, Bengaluru, Karnataka, India:

Profile: Data Engineer

Experience: 3+ Years

Location: Permanent Remote


What is Uplers Talent Network?

Uplers Talent Network is where top talent meets the right opportunities. It is a platform for candidates looking for the perfect opportunity to work with global companies on a contractual basis, and it gives top Indian talent access to global career exposure.


With us, you'll get the support, guidance, and opportunities that you need to take your career to the next level. So, if you're ready to embark on the journey of your next challenge, we're ready to be your engine!


Contractual Position

A contractual position usually requires you to sign and agree to the terms of a contract before you begin working. This structure can offer a variety of commitments that allow you to refine established skills and develop new ones.


Uplers Talent Network brings contractual positions with benefits like:

✔ Higher pay than industry standards

✔ Full-time position

✔ Ability to gain different skills in a short period

✔ Control over your career


Perks of joining Uplers Talent Network:

  • Talent Success Coach: Get connected with a dedicated coach to guide you before, during, and after your assignments with our clients.
  • Payout: Get paid in global currencies and earn more than industry standards.
  • Opportunity: Work with international companies and get global exposure with exciting projects.
  • Mobility: Work from the comfort of your living room couch or even a breezy beach.


How to become a part of our Talent Network?

  1. Take the first step, register on our portal
  2. Clear the decks and fill out the application form
  3. Gear up, clear the 3-stage assessment process
  4. And yes! Become a part of Uplers Certified Talent Network


Requirements:

  • 3+ years of experience in a Data Engineer role
  • Experience with big data tools: Hadoop, Spark, Kafka, etc
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (see the Airflow sketch after this list)
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark Streaming, etc.
  • Experience with object-oriented and functional scripting languages: Python, R, Java, C++, Scala, Go, etc.
  • Extensive knowledge of data engineering tools, technologies, and approaches
  • Hands-on experience with ETL tools
  • Knowledge of the design and operation of robust distributed systems
  • BI tools knowledge
  • Experience with data warehouses and related tools: SQL/NoSQL, Amazon Redshift, Panoply, Oracle, Talend, Informatica, Apache Hive, etc.
  • Bachelor's degree in data engineering, big data analytics, computer engineering, or a related field
  • Knowledge of multiple data technologies and concepts
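
As a reference point for the workflow-management requirement above, here is a minimal Apache Airflow DAG sketch with a daily extract-transform-load sequence. The DAG id, task bodies, and schedule are hypothetical placeholders, and the import paths assume Airflow 2.x.

```python
# Minimal Airflow DAG sketch (hypothetical DAG id, tasks, and schedule).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")


def transform():
    print("clean and reshape the extracted data")


def load():
    print("write the transformed data to the warehouse")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in order.
    t1 >> t2 >> t3
```
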
Click here to Apply: Apply Now

4. JPMorgan Chase & Co., Mumbai, Maharashtra, India:

Organization Description


Our Software Engineering Group supports several lines of business within the bank that back ESG initiatives ("Do good. For good."). We provide finance solutions to support ESG initiatives that are part of a $2.5 trillion commitment, delivering business insights across environmental, social, and diversity & inclusion financial analytics, regulatory reporting, and forecasting. We are a global team with colleagues across several locations in the US and India.


Our group is heavily focused on data processing, utilizing several different technology stacks, and we continually seek to improve our technology environment as part of our ongoing modernization journey. Our modernization plans include executing various on-prem version upgrades, automating manual legacy processes, and migrating to new data centers and the public cloud (AWS). Whether you are passionate about web apps (JavaScript, ReactJS, Spring Boot), data integration and analysis (Java, Python, Spark), traditional relational databases (Oracle, SQL Server, AWS RDS), or any of the related toolsets or stacks, you will have the opportunity to learn about the others as well.


We are looking for someone who is very strong in at least one of these tech stacks, can help with automation efforts or has AWS experience, and is willing to learn through our modernization journey together. Candidates who are always curious about technology will find our team primed with plenty of opportunities for learning, building out skill sets, and continued growth as a technologist.


Key Responsibilities


  • Set up and leverage models and entities in a data lake environment.
  • Build data pipelines on the data lake platform, including sourcing data and writing logic for ingestion, enrichment, and business reporting (see the sketch after this list).
  • Integrate data pipelines with analytical and reporting tools, leveraging AtScale/Dremio and Tableau.
  • Participate in design discussions and code reviews.
  • Engage with users to understand business requirements and address any reported issues.
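
The pipeline bullet above (sourcing, ingestion, enrichment, reporting) is easiest to picture with a small example. Here is a minimal PySpark sketch that sources a relational table over JDBC, enriches it, and lands it in the data lake; the connection URL, credentials handling, table, column, and path names are hypothetical, and it assumes the appropriate JDBC driver jar is on the Spark classpath.

```python
# Minimal source-to-data-lake pipeline sketch (hypothetical URL, table, and paths).
# Assumes the relevant JDBC driver jar is available on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("jdbc-to-lake-example").getOrCreate()

# Source: read a relational table over JDBC (credentials would come from a vault in practice).
accounts = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1")
    .option("dbtable", "finance.accounts")
    .option("user", "svc_reporting")
    .option("password", "***")
    .load()
)

# Enrichment: derive a reporting flag used by downstream dashboards.
enriched = accounts.withColumn(
    "is_active", F.when(F.col("status") == "OPEN", True).otherwise(False)
)

# Land the enriched data in the lake, partitioned for reporting queries.
enriched.write.mode("overwrite").partitionBy("status").parquet("/lake/curated/accounts")
```
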


Required Skills


  • Advanced knowledge of SQL and query optimization concepts (T-SQL and/or PL/SQL)
  • Well versed in data analysis and data modeling
  • Advanced knowledge of application, data, and infrastructure architecture disciplines
  • Unix shell scripting and/or Windows scripting (PowerShell, Perl, Batch scripts)
  • Strong experience with relational enterprise databases (Oracle and/or SQL Server)
  • Applying Process automation design principles and patterns
  • Creating/maintaining ETL processes
  • Good understanding of the change management process (ServiceNow)
  • Knowledge of industry-wide technology trends and best practices
  • Passionate about building an innovative culture
  • Ability to work in large, collaborative teams to achieve organizational goals
  • BS or MS in computer science, information systems, math, business, or engineering


Preferred Skills


  • Proficiency in one or more modern programming languages (Java, Python, Spark)
  • Experience migrating on-premises data workflows to the public cloud
  • AWS knowledge/certification (a huge plus)
  • Experience with scheduling tools such as AutoSys or Control-M
  • Familiarity with API development
  • BI/analytics experience (Tableau, Alteryx)
Click here to Apply: Apply Now

5. Bread Financial, Bengaluru, Karnataka, India:

Job Description


Essential Job Functions


Collaboration - Collaborates with internal/external stakeholders to manage data logistics – including data specifications, transfers, structures, and rules. Collaborates with business users, business analysts and technical architects in transforming business requirements into analytical workbenches, tools and dashboards reflecting usability best practices and current design trends. Demonstrates analytical, interpersonal and professional communication skills. Learns quickly and works effectively individually and as part of a team.


Process Improvement - Access, extract, and transform Credit and Retail data from a variety of sources and sizes (including client marketing databases and second- and third-party data) using Hadoop, Spark, SQL, and other big data technologies. Provides automation support to analytical teams for data-centric needs using orchestration tools, SQL, and possibly other big data/cloud solutions to improve efficiency.


Project Support - Support the Sr. Specialist and Specialist on new analytical proof-of-concept and tool exploration projects. Effectively manage time and resources in order to deliver correctly and on time across concurrent projects. Create POCs to ingest and process streaming data using Spark and HDFS.


Data and Analytics - Answer and troubleshoot questions about data sets and analytical tools; develop, maintain, and enhance new and existing analytics tools to support internal customers. Ingest data from files, streams, and databases, then process the data with Python and PySpark and store it in Hive or a NoSQL database. Manage data coming from different sources, and participate in HDFS maintenance and the loading of structured and unstructured data. Apply Agile Scrum methodology on the client's big data platform, using Git for version control. Import and export data with Sqoop between HDFS and relational databases. Demonstrate an understanding of Hadoop architecture and the underlying Hadoop framework, including storage management. Create POCs to ingest and process streaming data using Spark and HDFS. Work on the back end using Scala, Python, and Spark to implement aggregation logic.
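
The "Data and Analytics" paragraph above describes a typical PySpark-to-Hive flow. A minimal sketch of that pattern, assuming a Hive-enabled Spark session and using hypothetical paths, columns, and table names:

```python
# Minimal PySpark ingest-and-store sketch (hypothetical paths, columns, and table names).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets Spark write managed tables into the Hive metastore.
spark = (
    SparkSession.builder.appName("ingest-to-hive-example")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingest a raw file feed from HDFS.
transactions = spark.read.option("header", True).csv("/data/raw/transactions")

# Light processing: type the amount column and aggregate per account and day.
daily_totals = (
    transactions.withColumn("amount", F.col("amount").cast("double"))
                .groupBy("account_id", "txn_date")
                .agg(F.sum("amount").alias("total_amount"))
)

# Store the result as a Hive table (the "analytics" database is assumed to exist).
daily_totals.write.mode("overwrite").saveAsTable("analytics.daily_account_totals")
```
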


Technical Skills - Expertise in writing complex SQL queries and in database analysis for good performance. Experience working with Microsoft Azure services such as ADLS/Blob Storage, Azure Data Factory, Azure Functions, and Databricks. Basic knowledge of REST APIs for designing networked applications.


Reports To


Working Conditions/Physical Requirements: Normal office environment


Direct Reports: 0


Minimum Qualifications


Bachelor’s Degree in Computer Science or Engineering


0 to 3 years in Data & Analytics


Education Requirements


Bachelor’s Degree (Required)


Work Experience


Certifications:


None Required


Skills:


Apache Spark, Azure Blob Storage, Azure Data Factory, Azure Data Lake, Big Data, Cloud Environment, Cloudera Data Platform (CDP), Cloudera Hadoop, Cloudera Impala, Hadoop, HBase, HDFS, Hive, Linux Shell Scripting, NoSQL, Python, RESTful APIs, R Programming, Scala, Structured Query Language (SQL)

Click here to Apply: Apply Now
