Remote Data Mining And Management Job In Data Science And Analytics

Build out an Advanced Analytics and BI Platform


I will share my use case and am looking to build out an Advanced Analytics and BI platform that is cost-effective yet viable (capable of working successfully) and scalable, together with the shortlisted candidates.

The current use case is to collect data from CRM service providers such as Salesforce and
Microsoft CRM 365, transform it into meaningful data by logically joining the different
entities received, and then persist it into a Data Warehouse.
Business Intelligence dashboards will be developed, integrated with the Data Warehouse,
and used to perform analysis through analytical queries (joining, grouping, sorting, etc.).

Along with this, an Advanced Analytics Platform will be developed, in which the Data Science
team will first perform basic analysis and then build and train their Machine Learning
models on top of it for Predictive Analytics and Recommendation Engines.

Create the following modules:

1. Data Sources Management
Using this module, the user will be able to configure the Data Sources from which data
needs to be collected.
Once the user configures a Data Source, a Data Ingestion job will be submitted to Apache
Gobblin, and Gobblin will start collecting data from that Data Source.

2. Real-Time Analytics
Once the ingested data is available on Kafka, Spark Structured Streaming will be used to
process and transform it in a distributed way and write the results to MariaDB
ColumnStore.
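A minimal PySpark sketch of this path, assuming a placeholder topic, schema, and MariaDB JDBC endpoint:

```python
# Sketch of the Real-Time Analytics path: Kafka -> Spark Structured Streaming ->
# MariaDB ColumnStore over JDBC. Topic, schema, URL and credentials are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("crm-realtime-analytics").getOrCreate()

event_schema = StructType([
    StructField("entity", StringType()),       # e.g. Account, Opportunity
    StructField("id", StringType()),
    StructField("payload", StringType()),
    StructField("updated_at", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")   # placeholder brokers
    .option("subscribe", "crm.ingested")               # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

def write_to_columnstore(batch_df, batch_id):
    """Persist one micro-batch to MariaDB ColumnStore through the MariaDB JDBC driver."""
    (batch_df.write.format("jdbc")
        .option("url", "jdbc:mariadb://mariadb:3306/analytics")  # placeholder URL
        .option("dbtable", "crm_events")
        .option("user", "analytics").option("password", "***")
        .option("driver", "org.mariadb.jdbc.Driver")
        .mode("append")
        .save())

query = (events.writeStream
         .foreachBatch(write_to_columnstore)
         .option("checkpointLocation", "s3a://datalake/checkpoints/crm_events")
         .start())
query.awaitTermination()
```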

3. Data Lake
A Data Lake will be required, since a lot of data will be ingested while the Data Warehouse
will only hold the cleaned and transformed version of that data.
All data collected from the CRM Data Sources will arrive in JSON format, so Apache
Gobblin will convert the JSON data into Parquet format before loading it into the Data Lake,
for better I/O and low-latency reads.
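Gobblin performs the JSON-to-Parquet conversion in this design; purely as an illustration of that landing step, a PySpark version might look like the following (bucket names and the Minio s3a endpoint are placeholders):

```python
# Illustration of the JSON -> Parquet landing step (handled by Gobblin in this design).
# Bucket names and the s3a endpoint for Minio are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("json-to-parquet")
    .config("spark.hadoop.fs.s3a.endpoint", "http://minio:9000")   # placeholder Minio endpoint
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Read the raw JSON as delivered by the CRM ingestion job.
raw = spark.read.json("s3a://landing/crm/salesforce/accounts/*.json")

# Columnar Parquet gives better I/O and lower-latency reads for downstream jobs.
raw.write.mode("append").parquet("s3a://datalake/raw/crm/accounts")
```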

4. Data Processing
For data processing, Apache Spark, a distributed data processing engine, will be used.
Data Processing jobs will be scheduled using Apache Airflow; each job will read the latest
data from the Data Lake, apply the required transformations, and then persist the data to
the Data Warehouse.

5. Data Warehousing
For the Data Warehouse, Hive on Minio will be used, with Parquet as the file format. Hive
will act as the metastore: schemas for the various tables will be defined in it, and each
table will point to its corresponding Minio storage location.
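To illustrate how a warehouse table would point at its Minio location, a table could be registered in the Hive metastore via Spark SQL roughly as follows (database, columns, and bucket path are placeholders):

```python
# Sketch of defining a warehouse table: the Hive metastore holds the schema while the
# Parquet files live in a Minio (S3-compatible) bucket. Names and paths are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("warehouse-ddl")
    .config("spark.hadoop.fs.s3a.endpoint", "http://minio:9000")  # placeholder Minio endpoint
    .enableHiveSupport()
    .getOrCreate()
)

spark.sql("CREATE DATABASE IF NOT EXISTS crm_dw")
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS crm_dw.fact_opportunity (
        opportunity_id STRING,
        account_id     STRING,
        amount         DOUBLE,
        stage          STRING,
        closed_at      TIMESTAMP
    )
    STORED AS PARQUET
    LOCATION 's3a://warehouse/crm_dw/fact_opportunity'
""")
```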

Business Intelligence
Both querying engines, i.e. Spark SQL on Hive and MariaDB ColumnStore, support JDBC.
So any BI tool can connect to them using standard JDBC connections, execute analytical
queries, and create various charts/graphs.
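As an example of the kind of connection a BI tool would open, here is a hedged sketch using jaydebeapi with placeholder URLs, credentials, and driver jar paths:

```python
# Sketch of connecting over JDBC the way a BI tool would: one connection to the
# Spark Thrift Server (Spark SQL on Hive) and one to MariaDB ColumnStore.
# URLs, ports, credentials, and driver jar paths are placeholders.
import jaydebeapi

# Spark SQL on Hive, via the Spark Thrift Server (HiveServer2-compatible JDBC).
hive_conn = jaydebeapi.connect(
    "org.apache.hive.jdbc.HiveDriver",
    "jdbc:hive2://spark-thrift:10000/crm_dw",
    ["analytics", "***"],
    "/opt/drivers/hive-jdbc-standalone.jar",
)

# MariaDB ColumnStore for the real-time tables.
maria_conn = jaydebeapi.connect(
    "org.mariadb.jdbc.Driver",
    "jdbc:mariadb://mariadb:3306/analytics",
    ["analytics", "***"],
    "/opt/drivers/mariadb-java-client.jar",
)

# A typical analytical query a dashboard might issue.
cur = hive_conn.cursor()
cur.execute("SELECT stage, COUNT(*) AS opportunities FROM fact_opportunity GROUP BY stage")
print(cur.fetchall())
cur.close()
```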
About the recruiter
Member since May 20, 2018
N. Sridhar
from Cluj, Romania

Skills & Expertise Required

Apache Hive, Apache Kafka, Apache Spark, Backend Rest API, Minio

Candidate shortlisted and hired
Hiring open till - Apr 26, 2024

Work from Anywhere

40 hrs / week

Fixed Type

Remote Job

Cost: $19,161.68

Similar Projects

Converting JSON or Avro files to Parquet

I need to convert JSON, Avro or other row-based format files in S3 into Parquet columnar store formats using an AWS service like EMR or Glue.

I already have code that converts JSON to parquet using Python but the process is very manual, acco...read more

JTL Wawi x Versacommerce Interface Developer, RESTful API

Please note: please only apply for this job if you already have extensive experience with the software JTL Wawi. This is a must-have condition for this job!

Hi there,

We run a webshop with the German SaaS provider Versacommerce...read more

Hadoop: Need a subject matter expert to review multiple choice questions on big data technologies.

We are looking for consultants to review our multiple-choice-question-based assessment on Apache Hadoop and its related technologies. We will share the questions with the expert, who will have to critically review them.

Looking for Java Article Writer for Our Blog

We are looking for Java article writers for our blog.
You should have:
- practical experience in Java
- fluent English

Hadoop Developer with Scala and Java experience

I am looking for a Hadoop developer with Java and Scala experience, someone who has good experience and can handle pressure. This is a full-time support role for someone who can work New York standard time, Monday through Friday...read more