Sunday, January 12, 2020

QUOSPHERE


Quosphere is a global Digital Technology Solutions company for Fortune 1000 and SMB organizations. We specialize in transformative strategic consulting and offer domain-centric solutions in the areas of data management, business intelligence, visualization, predictive & prescriptive analytics and artificial intelligence. Acclaimed for our cutting-edge technical implementation of data science, data architecture and data engineering, we pride ourselves on being able to solve our clients' toughest data challenges.



Required: Final-year students (2020 batch) of B.Sc. Mathematics, Statistics, Economics, Computer Science, or M.Sc. Computer Science


JUNIOR DATA SCIENTIST - POSITION DETAILS
1. Junior Data Scientist
2. The position will be based out of Navi Mumbai, India
Here are some of the things we are looking for in a potential candidate:
1. Should be from a quantitative background such as Statistics, Mathematics, Computer Science, or
Engineering.
2. Must know Python.
3. Must know data preprocessing and cleansing techniques.
4. Familiarity with these packages in Python: sklearn, tensorflow, keras, statsmodels.
5. Must know machine learning algorithms such as decision trees, random forests, boosting and bagging
techniques, neural networks, support vector machines, and Naive Bayes, to name a few. The fundamentals
of these algorithms must be clear to the candidate (a minimal sklearn sketch of this kind of model appears after this list).
6. Should be able to apply statistical techniques such as inferential statistics, estimation (parametric
and non-parametric), ANOVA, time series analysis, and probabilistic modelling. The candidate should also
be able to interpret the results of the analyses.
7. Familiarity with building APIs in Python is desirable but not necessary.
8. Should know how to represent results of analyses visually. Familiarity with a data visualization tool is
desirable but not necessary.
9. Should have a passion for data and must love coding.
10. Should be eager to learn and should be willing to take on challenges.
11. Must have good communication skills and must be a team player.
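
To make items 4 and 5 concrete, here is a minimal, purely illustrative sklearn sketch: fitting a random forest on a small synthetic dataset and checking hold-out accuracy. The data and all names are placeholders, not part of the role description.

# Illustrative sketch only: fit a random forest (item 5) with sklearn (item 4)
# on a synthetic dataset and report hold-out accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for real client data.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))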
Job Responsibilities:
1. The candidate will need to understand the client's business requirements and implement
appropriate machine learning/statistical models accordingly.
2. Needs to be able to present his/her findings in a manner easily interpretable by the end-user.
3. The candidate will need to fetch data from multiple sources and preprocess the data into a form suitable
for the end analysis (see the sketch after this list).
4. The candidate should be able to code in Python and use his/her creativity and logic to express the problem
at hand through code.
5. Will need to co-ordinate with the data management team to understand the underlying business data
flow and logic.
6. Will need to ask relevant questions about the data at hand.
7. Will need to work with the product team on machine learning problems.
8. The candidate should have excellent written communication skills for documenting his/her work in a lucid and cogent manner.
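
As referenced in responsibility 3, the sketch below illustrates fetching data from two hypothetical sources (a CSV file and a SQLite table) and preprocessing it with pandas. It is a sketch of the kind of work involved, not a prescribed approach; the file, table, and column names are placeholders.

# Illustrative sketch only: combine data from two hypothetical sources and
# preprocess it for analysis. "orders.csv", "warehouse.db", and all column
# names below are placeholders.
import sqlite3
import pandas as pd

orders = pd.read_csv("orders.csv")                         # file-based source
conn = sqlite3.connect("warehouse.db")
customers = pd.read_sql("SELECT * FROM customers", conn)   # database source

df = orders.merge(customers, on="customer_id", how="left")
df = df.drop_duplicates()
df["amount"] = df["amount"].fillna(df["amount"].median())  # simple imputation
df["order_date"] = pd.to_datetime(df["order_date"])
print(df.describe())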


CTC: Rs. 5 LPA



Data Engineer
Job Overview
We are looking for savvy Data Engineers to join our fast-growing team of analytics experts. You will
be responsible for expanding and optimizing our data and data pipeline architecture, as well as
optimizing data flow and collection for cross functional teams.
The ideal candidate should be able to work with data, build data pipelines, and wrangle data, and
should enjoy optimizing data systems and building them from the ground up. The Data Engineer
will architect and analyze data initiatives and will ensure that an optimal data delivery architecture is
consistent throughout ongoing projects to support our data scientists and software developers.
They must be self-directed and comfortable supporting the data needs of multiple teams, systems
and products. The right candidate will be excited by the prospect of optimizing or even re-designing
our client’s data architectures to support our next generation of products and data initiatives.
Responsibilities for Data Engineer
• Create and maintain optimal data pipeline architecture, data warehouses, OLAP and analytics
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes,
optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from
a wide variety of data sources using SQL and AWS/Hadoop/Azure ‘big data’ technologies (a small ETL sketch follows this list).
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer
acquisition, operational efficiency and other key business performance metrics.
• Work with stakeholders including the Executive, Product, Data and Design teams to assist with
data-related technical issues and support their data infrastructure needs.
• Keep client data separated and secure across national boundaries through multiple data centers
and regions; a keen eye for data security is needed.
• Create data tools for analytics and data scientist team members that assist them in building and
optimizing our product into an innovative industry leader.
• Work with data and analytics experts to strive for greater functionality in our data systems.
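
The sketch below (referenced under the extraction/transformation/loading point above) shows a toy ETL step in Python and SQL. An in-memory SQLite database stands in for the SQL/AWS/Hadoop/Azure systems named above, and the table and column names are placeholders.

# Illustrative ETL sketch only: extract raw rows, transform them with a SQL
# aggregation, and load the result into an analytics table. SQLite stands in
# for a real warehouse; all table and column names are placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a hypothetical raw events table.
cur.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL, region TEXT)")
cur.executemany("INSERT INTO raw_events VALUES (?, ?, ?)",
                [(1, 10.0, "IN"), (1, 5.5, "IN"), (2, 7.25, "US")])

# Transform + Load: aggregate per user into an analytics-friendly table.
cur.execute("""
    CREATE TABLE user_spend AS
    SELECT user_id, region, SUM(amount) AS total_spend, COUNT(*) AS n_orders
    FROM raw_events
    GROUP BY user_id, region
""")

print(cur.execute("SELECT * FROM user_spend").fetchall())
conn.close()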

Qualifications for Data Engineer
• Advanced SQL knowledge
• Strong analytic skills related to working with unstructured/structured datasets.
• Strong project management and organizational skills.
• We are looking for a candidate who wants to build a career in a Data Engineering role and who has
attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or
another quantitative field.
• Basic project level experience using the following software/tools would be preferred:
✓ Experience with big data tools: Hadoop, Spark, Kafka, etc.
✓ Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
✓ Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
✓ Experience with AWS cloud services: EC2, EMR, RDS, Redshift
✓ Experience with stream-processing systems: Storm, Spark-Streaming, etc.
✓ Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala,
etc.

Basic Eligibility Criteria: 60% or above marks in class 10th, class 12th and graduation

CTC: Rs. 3.5 - 4 LPA, subject to yearly appraisals


Nature of Position:
- In the first two months, the candidate will undergo functional training and will be paid a monthly
stipend of INR 10,000
- Further employment with us is contingent on clearing the monthly and final evaluation
- Specialized Training Agreement for a period of 2 years.

Campus: February 14, 2020


Interested students are requested to register their names online (www.sxccal.edu - Placement Cell - Registration for Campus Drive).

Last date: January 17, 2020


*** Without a Placement Hall Ticket you will not be allowed to sit for the drive.

