Our team is looking to develop a pipeline that ingests our image data for training vision algorithms on AWS (though we are open to other platforms such as GCP if there is a good case to be made… our inclination toward AWS is due to SageMaker and its support for Tesla V100 GPUs). We are well versed in machine learning and are looking for someone with strong DevOps skills and decent machine learning experience to help set up this infrastructure, as our biggest barrier here is time.
Our goal is to implement five open-source deep learning algorithms on the cloud, configured to ingest data from an S3 bucket and write their results to a specified directory structure on S3 as well. We've defined the project architecture and where everything should be saved ahead of time, so you will just have to ensure that these algorithms can run on any GPU-powered instance and save their output to the right S3 buckets. Since the algorithms require a variety of packages, we ask that this particular Docker container be used to alleviate package-dependency issues: https://github.com/floydhub/dl-docker. There should also be a system to enable this container on any NVIDIA GPU-based instance (though additional dependencies might need to be installed to support these algorithms). The algorithms shouldn't need to be modified heavily from their original code, outside of pointing them to the right locations, and they should be tested to ensure they can be trained/tested on data read from S3 (we can certainly help with this part). We would consider the project completed once all five algorithms can be successfully trained and validated on some small test datasets that we will provide on S3, and once this setup can be replicated on another instance.
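For applicants, the workflow above could be sketched roughly as follows. This is a minimal sketch only: the bucket name, directory layout, and `train.py` entrypoint are hypothetical placeholders (our actual project structure will be provided separately), and it assumes an NVIDIA GPU instance with Docker, the nvidia-docker wrapper, and the AWS CLI already installed.

```shell
# Sketch only: bucket names, paths, and train.py are placeholders,
# not our actual project layout.
# Assumes an NVIDIA GPU instance with Docker, nvidia-docker, and the AWS CLI.

# 1. Pull the recommended deep learning container (GPU variant).
docker pull floydhub/dl-docker:gpu

# 2. Stage training data from S3 onto the instance.
aws s3 sync s3://example-bucket/datasets/algo1/ /data/algo1/

# 3. Run training inside the container with GPU access,
#    mounting the staged data and an output directory.
nvidia-docker run --rm \
    -v /data/algo1:/root/data \
    -v /output/algo1:/root/output \
    floydhub/dl-docker:gpu \
    python /root/shared/train.py --data-dir /root/data --out-dir /root/output

# 4. Push results back to the agreed S3 directory structure.
aws s3 sync /output/algo1/ s3://example-bucket/results/algo1/
```

Newer Docker releases replace the nvidia-docker wrapper with `docker run --gpus all`; either approach exposes the GPU to the container. Reading data directly from S3 inside the training code (e.g. via boto3) would also be acceptable in place of staging it with `aws s3 sync`.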
About the recruiter: Ruhl R R., from Bergamo, Italy. Member since Jul 3, 2017.