These are just random thoughts that have occurred to me over the last few days! They are subjective, so I am sure many will disagree with some of the comments.

ML Stacks

ML stacks are so fragile that any modification of the environment can bring the whole show crashing down, cue the
The first task is to extract the required data, at least the image components; sound will come later. The augmented data processing with dlib will take perhaps a few days, with roughly 1,000,000 images generated from the video files for the training and test datasets. I used a GCP low
This competition requires you to distinguish between real and fake videos, in both video and sound. Final position is based upon your ability to separate the real videos from the fakes in a 4,000-video dataset. You are provided with approximately 100,000 training/test videos. Submitted to qualify for free GCP
I undertook this in the hope that a little Linear Algebra (and Vector Calculus, the next module) would assist in understanding papers in the area. The course itself progressed from elementary vector algebra through to calculating eigenvalues/eigenvectors. I found it to be quite focused, though sometimes moving at a fast pace.
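As a small illustration of the kind of computation the course builds up to, here is a hand-checkable eigenvalue example in Python with NumPy (my own sketch, not course material — the matrix is chosen purely so the result is easy to verify by hand):

```python
import numpy as np

# A simple symmetric 2x2 matrix whose eigenvalues are easy to check by hand:
# det(A - lambda*I) = (2 - lambda)^2 - 1 = 0  ->  lambda = 1 or lambda = 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v of `eigenvectors` satisfies the defining relation A @ v = lambda * v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(sorted(eigenvalues.round(6).tolist()))  # -> [1.0, 3.0]
```

The assertion inside the loop is exactly the definition of an eigenpair, which makes it a handy sanity check when working through exercises numerically.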
Having used TensorFlow on GCP via Compute Engine instances (usually pre-built Ubuntu images with K80s/P4s/T4s) for image-processing work, I realized that some guidance on best practices would be quite useful, as there may have been areas I had missed. Additionally, given the wide range of GCP ML