A World Full of Opportunities Waiting Just for You

See The Open Positions Below


1. Business Analyst - Data Science - Chandigarh Branch

Job Description

3+ years of work experience in the following:

  • Proficiency in SQL
  • Handling and analyzing large datasets
  • Strong experience in Tableau
  • Data cleaning and processing
  • ETL tools like Alteryx
  • Python (Basic data manipulation using pandas, matplotlib, sklearn)
  • Excellent communication skills
  • A collaborative mindset for working with clients and internal teams
  • Motivation to learn constantly

You will be part of a team that solves business problems. This involves:

  • Brainstorming with clients and internal teams to define a problem
  • Translating the business problem into an analytical problem
  • Solving the analytical problem using a combination of technology, math, and domain knowledge.

Good experience with:

  • Basic operations: joins, unions, table properties
  • Window functions: RANK, PARTITION BY (a short sketch follows this list)
  • Parameterizing queries
  • Optimized table formats
  • Query optimization parameters
  • MySQL query optimization
  • Collecting, processing, and performing statistical analysis of data
  • An intuitive data mindset with a high “figure-it-out” quotient
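
To illustrate the level of SQL fluency expected, here is a minimal sketch combining a window function (RANK with PARTITION BY) and a parameterized query. It uses Python’s built-in sqlite3 module so it runs anywhere (assuming a Python build with SQLite 3.25+ for window-function support); the sales table and its columns are made up for illustration.

    import sqlite3

    # In-memory database with a made-up sales table, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE sales (region TEXT, rep TEXT, amount REAL);
        INSERT INTO sales VALUES
            ('North', 'Asha', 120.0), ('North', 'Ben', 90.0),
            ('South', 'Chen', 200.0), ('South', 'Dev', 150.0);
    """)

    # Parameterized query (the ? placeholder) plus a window function:
    # rank reps by amount within each region.
    query = """
        SELECT region, rep, amount,
               RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
        FROM sales
        WHERE amount >= ?
    """
    for row in conn.execute(query, (100.0,)):
        print(row)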

Required Skills & Experience:

  • Experience working with visualization and spreadsheet tools (Tableau, MS Excel), database systems (SQL/Hive), data cleaning, Power BI/Alteryx/QlikView, and Python/R
Submit Your Resume

2. MERN Stack Developer

Job Description

Job Summary:

We are seeking an experienced MERN Stack Developer with 4-5 years of hands-on experience to join our dynamic development team. The ideal candidate will be proficient in building scalable, efficient, and robust web applications using the MERN (MongoDB, Express.js, React.js, Node.js) stack. You will be responsible for full-stack development, collaborating with cross-functional teams to deliver innovative solutions.

Key Skills & Responsibilities:

  • Knowledge of MongoDB collections and documents, and an understanding of data modeling.
  • Understanding the importance of indexes for performance.
  • Ability to create basic routes and handle requests (GET, POST, PUT, DELETE).
  • Understanding of middleware functions in Express (e.g., for authentication, logging, validation).
  • Managing component state and passing data via props.
  • Understanding React component lifecycle methods/hooks (e.g., useEffect, useState).
  • Using libraries like react-router for client-side routing.
  • Advanced understanding of state management tools like Context API or Redux.
  • Understanding how JavaScript works in Node (modules, callbacks, promises).
  • Familiarity with Git and GitHub for managing code.
  • Writing unit tests (e.g., Jest for React, Mocha for Node/Express).
  • Debugging with Chrome Developer Tools, React Dev Tools, and using linters like ESLint.
  • Understanding Docker, CI/CD pipelines, and cloud services (e.g., AWS, Digital Ocean).

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • 4-5 years of experience working as a full-stack developer with the MERN stack.
  • Strong proficiency in MongoDB, Express.js, React.js, and Node.js.
  • Experience with front-end technologies such as HTML5, CSS3, and JavaScript (ES6+).
  • Familiarity with version control tools like Git.
  • Understanding of RESTful APIs and web services.
  • Experience with cloud services (e.g., AWS, Azure) is a plus.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration abilities.

Good to Have:

  • Experience with serverless architectures and microservices.
  • Knowledge of DevOps practices, CI/CD pipelines.
  • Familiarity with TypeScript.
Submit Your Resume

3. Data Engineer (Snowflake, dbt, Airflow)

Job Description

2-3 years of work experience in the following:

  • Architect, design, document, and implement data pipelines that feed data models for downstream consumption in Snowflake, using dbt and Airflow.
  • Ensure correctness and completeness of the data transformed via engineering pipelines for end consumption in analytical dashboards.
  • Actively monitor and triage technical challenges in critical situations that require immediate resolution.
  • Evaluate viable technical solutions and share MVPs or PoCs in support of the research.
  • Develop relationships with external stakeholders to maintain awareness of data and security issues and trends.
  • Review work from other tech team members and provide feedback for growth.
  • Implement data performance and data security policies that align with governance objectives and regulatory requirements.

You will be part of a team that solves business problems. What we look for:

  • Experience in data warehousing, data modelling, and building data engineering pipelines, with fluency in data engineering methods such as ETL and ELT through scripting and/or tooling.
  • Skill in analyzing performance bottlenecks and providing enhancement recommendations, plus a passion for customer service and a desire to learn and grow as a professional and a technologist.
  • Strong analytical skills for working with unstructured datasets.
  • Experience collaborating with product owners to identify requirements, define desired outcomes, and deliver trusted results.
  • Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
  • Comfort with a heavily SQL-focused role.

The ideal candidate must have hands-on experience with SQL database design, plus Python.

  • Demonstrably deep understanding of SQL (advanced level) and analytical data warehouses (Snowflake preferred).
  • Demonstrated ability to write well-documented new code kept in a version control system (we use GitHub and Bitbucket).
  • Strong skills in applying SCD, CDC, and DQ/DV frameworks.
  • Familiarity with JIRA and Confluence.
  • Must have exposure to technologies such as dbt, Apache Airflow, and Snowflake (a minimal orchestration sketch follows this list).
  • Desire to continually keep up with advancements in data engineering practices.
  • Knowledge of AWS and Python is a plus.
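
To give a flavor of the dbt-plus-Airflow orchestration this role centers on, below is a minimal sketch of a daily Airflow DAG that builds and then tests a dbt project. The dag_id, the profiles path, the use of BashOperator, and the Airflow 2.4+ argument names are illustrative assumptions, not a prescribed setup.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Hypothetical daily pipeline: build dbt models in Snowflake, then test them.
    with DAG(
        dag_id="dbt_snowflake_daily",    # illustrative name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",               # Airflow 2.4+ argument name
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --profiles-dir /opt/dbt",  # assumed path
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --profiles-dir /opt/dbt",
        )
        dbt_run >> dbt_test  # tests run only after a successful build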

Good experience with:

  • 4+ years of IT experience with a major focus on data warehouse/database projects
  • Must have exposure to technologies such as dbt, Apache Airflow, and Snowflake
  • Experience in data platforms: Snowflake, Oracle, SQL Server, MDM, etc.
  • Expertise in writing SQL and database objects (stored procedures, functions, views); hands-on experience in ETL/ELT, data security, SQL performance optimization, and job orchestration tools and technologies, e.g., dbt, Attunity, GoldenGate, APIs, Apache Airflow
  • Experience in data modelling and relational database design
  • Well versed in applying SCD, CDC, and DQ/DV frameworks
  • Demonstrated ability to write well-documented new code kept in a version control system (we use GitHub and Bitbucket)
  • Good to have: experience with cloud platforms such as AWS, Azure, GCP, and Snowflake
  • Good to have: strong programming/scripting skills (Python, PowerShell, etc.)
  • Good to have: knowledge of developing financial models and forecasting to support financial planning and decision-making processes
  • Experience analyzing and interpreting financial data to provide valuable insights and support strategic decision-making
  • Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs)
  • Excellent written and oral communication and presentation skills for presenting architecture, features, and solution recommendations

You’ll Work With

  • Global functional product portfolio technical leaders (Finance, HR, Marketing, Legal, Risk, IT), product owners, functional area teams across levels
  • Global Data Product Portfolio Management & teams (Enterprise Data Model, Data Catalog, Master Data Management)
  • Consulting and internal Data Product Portfolio teams

Essential Education

  • Bachelor’s degree or equivalent combination of education and experience.
  • Bachelor’s degree in information science, data management, computer science, or a related field preferred.
Submit Your Resume

4. Data Architect (Data Modelling, Snowflake, dbt, Airflow)

Job Description

4-5 years of work experience in the following:

  • Define and design future state data architecture for financial budgeting, reporting, forecasting and analysis products.
  • Partner with Technology, Data Stewards and various Products teams in an Agile work stream while meeting program goals and deadlines.
  • Create financial business cases / user personas to support various business initiatives.
  • Engage with line of business, operations, and project partners to gather process improvements.
  • Lead the design and build of new models to efficiently deliver financial results to senior management.
  • Evaluate Data related tools and technologies and recommend appropriate implementation patterns and standard methodologies to ensure our Data ecosystem is always modern.
  • Collaborate with Enterprise Data Architects in establishing and adhering to enterprise standards while also performing POCs to ensure those standards are implemented.
  • Provide technical expertise and mentorship to Data Engineers and Data Analysts in the Data Architecture.
  • Develop and maintain processes, standards, policies, guidelines, and governance to ensure that a consistent framework and set of standards is applied across the company.
  • Create and maintain conceptual/logical data models that identify key business entities and visualize their relationships.
  • Work with business and IT teams to understand data requirements.
  • Maintain a data dictionary consisting of table and column definitions.
  • Review data models with both technical and business audiences.

You’re Good At:

  • Designing and documenting the overall processes and process flows for the data architecture, and training the team on them.
  • Resolving technical challenges in critical situations that require immediate resolution.
  • Developing relationships with external stakeholders to maintain awareness of data and security issues and trends.
  • Reviewing work from other tech team members and providing feedback for growth.
  • Implementing data security policies that align with governance objectives and regulatory requirements.

You Bring (Experience & Qualifications)

Essential Education:

  • Bachelor’s degree or equivalent combination of education and experience.
  • Bachelor’s degree in information science, data management, computer science, or a related field preferred.

Good experience with:

  • 12+ years of IT experience with a major focus on data warehouse/database projects
  • Expertise in cloud databases like Snowflake/Redshift, data catalogs, MDM, etc.
  • Expertise in writing SQL and database procedures
  • Proficient in data modelling: conceptual, logical, and physical (a small schema sketch follows this list)
  • Proficient in documenting all architecture-related work performed
  • Hands-on experience in data storage, ETL/ELT, and data analytics tools and technologies, e.g., Talend, dbt, Attunity, GoldenGate, Fivetran, APIs, Tableau, Power BI, Alteryx, etc.
  • Experienced in data warehouse design/development and BI/analytical systems
  • Experience working on projects using Agile methodologies
  • Strong hands-on experience with data and analytics architecture, solution design, and engineering
  • Experience with cloud big data technologies such as AWS, Azure, GCP, and Snowflake
  • Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs)
  • Ability to review existing databases, data architecture, and data models across multiple systems and propose enhancements for cross-compatibility and target systems
  • Excellent written and oral communication and presentation skills for presenting architecture, features, and solution recommendations
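
As a small, concrete illustration of physical data modelling with SCD Type 2 history (both called out above), here is a star-schema fragment sketched with Python’s built-in sqlite3 so it runs anywhere; the table and column names are hypothetical, not a prescribed standard.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- SCD Type 2 customer dimension: each attribute change adds a new row,
        -- tracked with effective dates and a current-row flag.
        CREATE TABLE dim_customer (
            customer_sk    INTEGER PRIMARY KEY,  -- surrogate key
            customer_id    TEXT NOT NULL,        -- natural/business key
            segment        TEXT,
            effective_from TEXT NOT NULL,
            effective_to   TEXT,                 -- NULL while the row is current
            is_current     INTEGER NOT NULL DEFAULT 1
        );

        -- The fact table joins to the dimension on the surrogate key, so each
        -- order keeps the customer attributes that were current at load time.
        CREATE TABLE fact_orders (
            order_id    TEXT PRIMARY KEY,
            customer_sk INTEGER REFERENCES dim_customer(customer_sk),
            order_date  TEXT,
            amount      REAL
        );
    """)
    print("star-schema fragment created")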

You’ll Work With

  • Global functional tribe technical leaders (Finance, HR, Marketing, Legal, Risk, IT), product owners, functional area teams across levels.
  • Global Data Tribe Management & teams (Enterprise Data Model, Data Catalog, Master Data Management).
  • Consulting and internal Data Tribe teams.
Submit Your Resume

5. Product Owner

Job Description

4-5 years of work experience in the following:

  • Vision and Strategy:  Define and communicate the product vision and strategy for Pulse survey automation and
    customer insights products. Ensure alignment with the organization’s overall business goals.
  • Backlog Management: Create, maintain, and prioritize the product backlogs for both projects. Articulate user
    stories and acceptance criteria. Ensure transparency and clarity of the backlogs to all stakeholders.
  • Stakeholder Engagement: Collaborate with stakeholders to gather requirements and feedback for both projects.
    Manage stakeholder expectations and ensure their needs are addressed.
  • Development Collaboration: Work closely with the development teams to clearly understand product
    requirements for both projects. Participate in sprint planning, reviews, and retrospectives. Make decisions
    regarding priority, scope, and acceptance of work results.
  • Market and User Research: Conduct market and user research to identify both projects’ customer needs and market opportunities. Stay updated on industry trends, competitor products, and emerging technologies.
  • Product Roadmap: Develop and maintain product roadmaps outlining the development trajectories for both
    projects. Communicate the roadmaps to the teams and stakeholders.
  • Release Management: Plan and manage product releases for both projects, ensuring timely delivery of high-quality features. Coordinate with marketing, sales, and support teams for product launches.
  • Performance Measurement: Define and track key performance indicators (KPIs) to measure the success of both
    products. Analyze product performance and make data-driven decisions to drive improvements.
  • Customer Insights: Utilize customer data and insights to drive product decisions for both projects. Ensure the
    products meet customer needs and enhance user experience.
  • Compliance and Risk Management: Ensure the products comply with relevant regulations and standards. Identify
    and manage risks related to product development and deployment.

Key Skills and Qualifications

Education:

Bachelor’s degree in Computer Science, Information Systems, Business, or related field.

Experience:

  • Proven experience as a Product Owner or similar role in technology-driven environments.
  • Experience in technical product development, including technologies like AWS, Alteryx, Tableau, dbt, Airflow, Glue, Python, survey tools, and databases.
  • Technical Proficiency: Strong understanding of software development processes and methodologies (e.g., Agile, Scrum).
  • Proficiency in Python and associated data science libraries and frameworks.
  • Knowledge in AI, machine learning, and natural language processing.
  • Experience with leveraging, training, and fine-tuning foundation models, including multimodal inputs and outputs.
  • Strong experience working with key LLM APIs (e.g., OpenAI, Anthropic) and LLM frameworks (e.g., LangChain, LlamaIndex).
  • Experience with multi-agent systems and frameworks.
  • Expertise in using libraries like unstructured.io for handling various document formats.
  • Proficiency in building and querying knowledge bases using LlamaIndex and related indexing strategies.
  • Knowledge of text chunking techniques for optimal processing and indexing of large documents.
  • Proficiency in generating and working with text embeddings using models like BERT, GPT, or domain-specific embedding models.
  • Experience in constructing and querying knowledge graphs, including technologies like Neo4j or RDF triple stores, and understanding of ontology design.
  • Expertise in RAG (Retrieval-Augmented Generation) systems, vector databases, and semantic search (a toy retrieval sketch follows this list).
  • Analytical and Problem-Solving: Excellent analytical and problem-solving skills.
  • Experience with data analytics tools and technologies is a plus.
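
To make the retrieval half of RAG concrete, here is a toy Python sketch of semantic search: texts are represented as vectors, and the passage closest to the query is retrieved by cosine similarity. The three-dimensional vectors are fabricated for illustration; a real system would use a learned embedding model (e.g., a BERT-family model) and a vector database.

    import numpy as np

    # Made-up 3-d "embeddings" of three passages; real ones have hundreds of dims.
    passages = {
        "pricing tiers": np.array([0.9, 0.1, 0.0]),
        "survey automation": np.array([0.1, 0.8, 0.3]),
        "refund policy": np.array([0.7, 0.2, 0.1]),
    }
    query = np.array([0.2, 0.9, 0.2])  # pretend embedding of a user question

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Retrieve the most similar passage; in RAG it would be passed to the
    # LLM as grounding context for the answer.
    best = max(passages, key=lambda name: cosine(passages[name], query))
    print(best)  # -> "survey automation"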

Communication and Collaboration:

  • Strong verbal and written communication skills.
  • Demonstrated ability to lead and collaborate with cross-functional teams.
  • Deep understanding of customer needs and user experience.

Strategic and Organizational:

  • Strategic thinker who can align product goals with business objectives.
  • Strong organizational and time management skills.
  • Flexibility to adapt to changing priorities and market conditions.
  • Knowledge of industry trends, market research, and competitive analysis.

Tools Proficiency:

  • Proficiency with product management tools (e.g., Jira, Trello, Confluence) and wireframing/prototyping
    tools (e.g., Figma, Adobe XD).

Preferred Qualifications:

  • Experience in survey automation, data analytics, or similar domains.
  • Certified Scrum Product Owner (CSPO) or similar Agile certifications.
  • Familiarity with frontend and backend technologies, database management, and API integrations.
  • Understanding of user experience (UX) and user interface (UI) design principles.

Personal Attributes:

  • Detail-oriented: Meticulous attention to detail in defining and prioritizing product features.
  • Adaptable: Ability to thrive in a fast-paced, dynamic environment with changing priorities.
  • Collaborative: Strong team player with a collaborative mindset.
  • Proactive: Self-motivated with a proactive approach to identifying opportunities and solving problems.
Submit Your Resume

6. Full Stack (Python + React or React + Java)

Job Description

3+ years of work experience in the following:

Roles and Responsibilities:

  • You will bring expertise in full-stack development and in the performance and stability of custom-built products.
  • This Senior IT Full Stack Engineer oversees all aspects of backend development, including API and core functional logic, with frontend development as an additional skill.
  • This may include API development and backend development in technologies like Python, React, and Java.
  • Work also involves documenting all designs and development, attending Agile ceremonies, and demoing/presenting development work incrementally.

Relevant Information:

  • The team works across multiple time zones; you will be expected to work a shift that provides some overlap with US East Coast hours.
  • Experience in Agile is expected.
  • Familiarity with databases/data warehouses (e.g., Snowflake, AWS DynamoDB, RDS, MySQL, MongoDB), web servers (e.g., Apache), and UI/UX design is expected.
  • Hands-on experience in Python, AWS, React, and Java API development (a minimal API sketch follows this list).
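
As a small illustration of the backend API work described above, here is a minimal sketch using Flask (chosen for brevity; it assumes Flask 2.0+ for the route shortcuts). The routes and the in-memory store are hypothetical.

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    items = []  # in-memory store for illustration; a real service uses a database

    @app.get("/health")
    def health():
        # Simple liveness probe.
        return jsonify(status="ok")

    @app.post("/items")
    def create_item():
        # Accept a JSON body and echo it back with a 201 Created status.
        item = request.get_json()
        items.append(item)
        return jsonify(item), 201

    if __name__ == "__main__":
        app.run(port=8000)
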
Submit Your Resume

7. Gen AI Engineer

Job Description

3+ years of work experience in the following:

We are seeking a skilled and innovative Generative AI Engineer to join our dynamic team. The
primary responsibility of the successful candidate will be to design, develop, and implement
cutting-edge generative AI applications using various techniques and frameworks available for
building AI apps. The ideal candidate will have a deep understanding of large language models
(LLMs), possess advanced Python programming skills, and be knowledgeable about various
models available in the market, both commercial and open-source.

Key Responsibilities:

  • Design, develop, and deploy generative AI applications using LangChain, LlamaIndex, and other relevant frameworks.
  • Collaborate with cross-functional teams to understand business requirements and translate
    them into technical solutions.
  • Conduct research and stay up-to-date with advancements in generative AI and large language
    models.
  • Evaluate and experiment with different LLMs to determine the best fit for specific use cases.
  • Optimize AI models for performance, scalability, and robustness.
  • Implement and integrate AI models into existing and new applications.
  • Develop and maintain comprehensive documentation for AI models and applications.
  • Troubleshoot and resolve issues related to AI models and applications.
  • Apply various prompt engineering techniques to improve the performance and accuracy of AI
    models.
  • Build and enhance chatbots, conversational agents, and retrieval-augmented generation
    (RAG) applications.
  • Test generative AI applications for accuracy and ensure models do not hallucinate while
    generating responses.
  • Implement techniques to add guardrails, ensuring that the application produces desired and
    safe results.
  • Utilize embeddings and vector databases to enhance the functionality of generative AI applications.

Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning,
    or a related field.
  • Proven experience in developing AI applications using frameworks like LangChain and LlamaIndex.
  • In-depth knowledge of large language models, including but not limited to OpenAI’s GPT models, Google’s BERT, and other commercial and open-source models.
  • Strong programming skills in Python and familiarity with AI/ML libraries such as TensorFlow,
    PyTorch, and Hugging Face.
  • Experience with natural language processing (NLP) and natural language understanding
    (NLU).
  • Familiarity with cloud platforms and services such as AWS, GCP, or Azure, especially AI-related services like Amazon Bedrock, Azure Cognitive Search, etc.
  • Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
  • Strong written and verbal communication skills.
  • Experience in building chatbots, conversational agents, and retrieval-augmented generation (RAG) applications.
  • Experience in testing generative AI applications for accuracy and ensuring model reliability.
  • Knowledge of techniques to add guardrails for safe and desired application outcomes (a minimal guardrail sketch follows this list).
  • Understanding of embeddings and vector databases and their application in generative AI.
  • Proficiency in various prompt engineering techniques to enhance AI model performance.
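
To make the guardrail idea concrete, here is a minimal, framework-free Python sketch that checks whether a model’s answer is lexically grounded in the supplied context and re-prompts once before refusing. call_llm is a hypothetical stand-in for a real client call (e.g., an OpenAI or Bedrock SDK); production systems use far more robust checks.

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM API call.
        return "Our support line is open 9am-5pm."

    def grounded(answer: str, context: str) -> bool:
        # Crude lexical check: every sentence must share words with the context.
        ctx = set(context.lower().split())
        return all(set(s.lower().split()) & ctx
                   for s in answer.split(".") if s.strip())

    def answer_with_guardrail(question: str, context: str) -> str:
        prompt = f"Answer ONLY from this context:\n{context}\n\nQ: {question}"
        answer = call_llm(prompt)
        if not grounded(answer, context):
            # One retry with a stricter instruction; refuse if it still fails.
            answer = call_llm(prompt + "\nIf the answer is not in the context, say so.")
            if not grounded(answer, context):
                return "I can't answer that from the provided context."
        return answer

    print(answer_with_guardrail(
        "When is support open?",
        "Our support line is open 9am-5pm, Monday to Friday.",
    ))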

Preferred Qualifications:

  • Experience with deploying AI models in production environments.
  • Knowledge of vector databases and retrieval-augmented generation (RAG) techniques.
  • Understanding of ethical considerations and best practices in AI development.
  • Experience with containerization technologies such as Docker and Kubernetes.
  • Understanding of model hallucination issues and methods to mitigate them.
Submit Your Resume

Want to know about us?