Compete in Kaggle cheaply!
Use a basic local server for proof-of-concept runs (with restricted hyperparameters), then deploy the models to GCP or AWS in the cloud.
A useful way of doing this is with a git repository, both for versioning and for a seamless code/data environment between the local and cloud systems.
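One minimal way to wire up the "restricted locally, full-size in the cloud" idea is an environment flag that the same script reads on both machines. This is only a sketch; the `RUN_ENV` variable name and the specific hyperparameter values are illustrative assumptions, not from the post.

```python
import os

def hyperparams(is_cloud: bool) -> dict:
    """Restricted settings for a local proof-of-concept run,
    full-size settings for the cloud run."""
    return {
        "epochs": 50 if is_cloud else 3,
        "batch_size": 256 if is_cloud else 32,
        "subsample": 1.0 if is_cloud else 0.1,  # fraction of training data used
    }

# Hypothetical convention: export RUN_ENV=cloud on the GCP/AWS box,
# leave it unset on the local workstation.
params = hyperparams(os.environ.get("RUN_ENV") == "cloud")
print(params)
```

Because the script is identical on both ends, a plain `git push` / `git pull` is all it takes to move between the workstation and the cloud instance.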
Look for something like a 2007 Dell workstation, a T5400 or T7400. Both are cheap, can take 32 GB of (dirt-cheap PC2-5300E) RAM, two 3.5″ SATA drives, and a full-size GPU/video card, with 2 × PCI slots. They are also superbly built, rock-solid reliable platforms with one or two Xeon CPUs (including the X series), so you will have some power behind you even in 2020. The (2007) single Xeon E5410 performs about the same as a current 2020 low-end i5, and with 32 GB of RAM the system runs pretty snappily; coupled with a GTX 980 (4 GB) it makes a usable system. With more/better CPUs you are looking at great performance.
Dell Professional Workstation T5400 with 32 GB RAM and a single Xeon (4-core): $80. Add an 80 GB (OS) drive and a 500 GB (data) drive (both 7200 RPM): $30. A used MSI GTX 980 4 GB: $80. Running Xubuntu 18.04 and the Anaconda stack, connected via WLAN to other resources (and the Internet for remote processing and Jupyter).
You can also add a RAID or SAS card (with 15K-RPM drive capability) – these cards are dirt cheap as well, usually below $20.
With a huge 900 W+ professional-grade PSU, this machine can handle even the heavy-duty 300 W cards that came before Maxwell (Titans/GTX 690).
Just a demonstration of how deep learning of fairly non-trivial networks can be performed on cheap hardware picked up off eBay.
Remember, though, that Kaggle (owned by Google) often offers free GCP and TPU credits… at the moment, for example, I have around $300 of GCP credit and unlimited TPU credits (time-limited to the next few months) – so check out these freebies as well.