
Required: Apache Hive, Apache Kafka, Apache Spark, Backend REST API, MinIO freelancer for the "Build out Advanced Analytics and BI Platform" job

Posted at - Aug 10, 2022


I will share my use case with shortlisted candidates. I am looking to build out an Advanced Analytics and BI platform that is cost-effective yet viable (capable of working successfully) and scalable.

The current use case is to collect data from CRM service providers such as Salesforce and
Microsoft CRM 365, transform it into meaningful data by logically joining the
different entities received, and then persist it into a Data Warehouse.
Business Intelligence dashboards will be developed, integrated with the Data
Warehouse, and used to perform analysis via analytical queries (joining, grouping, sorting,
etc.).

Along with this, an Advanced Analytics platform will be developed in which the Data Science
team will first perform basic analysis and then build and train their Machine Learning
models on top of it for Predictive Analytics and Recommendation Engines.

Create the below modules -

1. Data Sources Management
Using this module, users will be able to configure the Data Sources from which data
needs to be collected.
Once a user configures a Data Source, a Data Ingestion job will be submitted to Apache
Gobblin, and Gobblin will start collecting data from that Data Source (see the sketch below).
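
A minimal sketch of how a configured Data Source might be submitted to the backend; the REST endpoint, payload fields, and schedule format are hypothetical and would be defined by the configuration API:

```python
import requests

# Hypothetical payload describing a CRM data source; field names are illustrative only.
source_config = {
    "name": "salesforce-prod",
    "type": "salesforce",
    "auth": {"client_id": "...", "client_secret": "...", "refresh_token": "..."},
    "entities": ["Account", "Contact", "Opportunity"],
    "schedule": "0 * * * *",  # hourly ingestion
}

# Hypothetical backend REST API endpoint that registers the source and
# submits the corresponding ingestion job to Apache Gobblin.
resp = requests.post(
    "http://analytics-backend:8080/api/v1/datasources",
    json=source_config,
    timeout=30,
)
resp.raise_for_status()
print("Data source registered, ingestion job id:", resp.json().get("job_id"))
```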

2. Real-Time Analytics
Once the ingested data is available in Kafka, Spark Structured Streaming will be
used to process and transform it in a distributed way and write the results to MariaDB
ColumnStore.
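
A minimal PySpark sketch of this streaming path, assuming an illustrative Kafka topic name, event schema, and MariaDB connection details (all placeholders):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("crm-realtime-analytics").getOrCreate()

# Illustrative schema for CRM change events arriving as JSON on Kafka.
event_schema = StructType([
    StructField("entity", StringType()),
    StructField("id", StringType()),
    StructField("payload", StringType()),
])

# Read the ingested data from Kafka (brokers and topic name are assumptions).
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "kafka:9092")
       .option("subscribe", "crm-events")
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

def write_to_columnstore(batch_df, batch_id):
    # MariaDB ColumnStore is reachable over the standard MariaDB JDBC driver;
    # URL, table, and credentials are placeholders.
    (batch_df.write.format("jdbc")
     .option("url", "jdbc:mariadb://mariadb:3306/analytics")
     .option("dbtable", "crm_events")
     .option("user", "analytics")
     .option("password", "secret")
     .option("driver", "org.mariadb.jdbc.Driver")
     .mode("append")
     .save())

# Write each micro-batch to MariaDB ColumnStore.
query = (events.writeStream
         .foreachBatch(write_to_columnstore)
         .option("checkpointLocation", "s3a://datalake/checkpoints/crm-events")
         .start())
query.awaitTermination()
```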

3. Data Lake
A Data Lake will be required because a lot of data will be ingested, while the Data Warehouse
will hold only the cleaned and transformed version of the data.
All data collected from the CRM Data Sources will arrive in JSON format, so Apache
Gobblin will convert the JSON data into Parquet format before loading it into the Data Lake for
better I/O and low-latency reads.
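
In the design above, Gobblin performs the JSON-to-Parquet conversion; the PySpark sketch below only illustrates the equivalent conversion and the MinIO (S3A) layout it would produce. The endpoint, credentials, paths, and partition columns are assumptions:

```python
from pyspark.sql import SparkSession

# Spark session configured to reach MinIO through the S3A connector;
# endpoint and credentials are placeholders.
spark = (SparkSession.builder.appName("json-to-parquet")
         .config("spark.hadoop.fs.s3a.endpoint", "http://minio:9000")
         .config("spark.hadoop.fs.s3a.access.key", "minio-access-key")
         .config("spark.hadoop.fs.s3a.secret.key", "minio-secret-key")
         .config("spark.hadoop.fs.s3a.path.style.access", "true")
         .getOrCreate())

# Read raw JSON landed by ingestion and rewrite it as Parquet, partitioned
# by entity and ingestion date (assumed columns) for cheaper scans later.
raw = spark.read.json("s3a://datalake/raw/crm/")
(raw.write.mode("append")
 .partitionBy("entity", "ingestion_date")
 .parquet("s3a://datalake/parquet/crm/"))
```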

4. Data Processing
For Data Processing, Apache Spark will be used which is distributed data processing
engine and Data Processing Jobs will be scheduled using Apache Airflow and it will
read latest data from Data Lake and apply required transformations and then persist the
data to Data Warehouse.
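
A minimal Airflow DAG sketch for this schedule; the DAG id, interval, application path, and arguments are assumptions, and SparkSubmitOperator comes from the apache-airflow-providers-apache-spark package:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

# Daily batch job: read the latest Parquet data from the Data Lake, transform it
# with Spark, and persist the result to the Data Warehouse.
with DAG(
    dag_id="crm_lake_to_warehouse",              # hypothetical DAG id
    start_date=datetime(2022, 8, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform_and_load = SparkSubmitOperator(
        task_id="transform_and_load",
        application="/opt/jobs/crm_transform.py",  # placeholder PySpark application
        conn_id="spark_default",
        application_args=[
            "--source", "s3a://datalake/parquet/crm/",
            "--target", "crm_warehouse.accounts",
        ],
    )
```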

5. Data Warehousing
For the Data Warehouse, Hive on MinIO will be used, and the file format will be Parquet. Hive
will act as the metastore: schemas will be defined in it for the various tables, and each table
will point to its corresponding MinIO storage location.
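
A short sketch of what such a warehouse table definition could look like, issued through Spark SQL with Hive support enabled; the database name, table schema, and MinIO location are illustrative:

```python
from pyspark.sql import SparkSession

# Hive support makes table definitions land in the Hive metastore;
# the MinIO (S3A) settings are placeholders.
spark = (SparkSession.builder.appName("warehouse-ddl")
         .config("spark.hadoop.fs.s3a.endpoint", "http://minio:9000")
         .config("spark.hadoop.fs.s3a.access.key", "minio-access-key")
         .config("spark.hadoop.fs.s3a.secret.key", "minio-secret-key")
         .config("spark.hadoop.fs.s3a.path.style.access", "true")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("CREATE DATABASE IF NOT EXISTS crm_warehouse")

# Hypothetical table: the schema lives in the Hive metastore while the
# Parquet data files sit in their MinIO storage location.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS crm_warehouse.accounts (
        account_id   STRING,
        account_name STRING,
        industry     STRING,
        updated_at   TIMESTAMP
    )
    STORED AS PARQUET
    LOCATION 's3a://warehouse/crm/accounts/'
""")
```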

Business Intelligence
Both query engines, i.e. Spark SQL on Hive and MariaDB ColumnStore, support JDBC.
So any BI tool can connect to them using standard JDBC connections, execute
analytical queries, and create various charts/graphs.
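
A small sketch of the kind of JDBC query a BI tool would issue, shown here from Python with the jaydebeapi package; the driver jar path, host, credentials, and table are placeholders:

```python
import jaydebeapi

# JDBC connection to the warehouse via the HiveServer2 / Spark Thrift Server endpoint
# (host, port, credentials, and jar path are placeholders).
conn = jaydebeapi.connect(
    "org.apache.hive.jdbc.HiveDriver",
    "jdbc:hive2://spark-thrift-server:10000/crm_warehouse",
    ["analytics", "secret"],
    "/opt/jars/hive-jdbc-standalone.jar",
)

cur = conn.cursor()
# Typical analytical query: grouping and sorting over a warehouse table.
cur.execute("""
    SELECT industry, COUNT(*) AS accounts
    FROM accounts
    GROUP BY industry
    ORDER BY accounts DESC
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```

The same pattern applies to MariaDB ColumnStore by swapping in the MariaDB JDBC driver (org.mariadb.jdbc.Driver) and a jdbc:mariadb:// URL.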

About the recruiter - Sal Jivraj, from Ekiti, Nigeria. Member since May 20, 2018.

Skills & Expertise Required

Apache Hive, Apache Kafka, Apache Spark, Backend REST API, MinIO

Candidate shortlisted and hired. Hiring open till - Sep 9, 2022

Work from Anywhere
40 hrs / week
Fixed Type
Remote Job
Cost - $13,913.88
