Kaggle Deepfakes Competition Result: 598/2265

So, finishing in the top 27% with a week's work isn't so bad, though it isn't so good either. It did give me some useful experience in hyperparameter optimization for a convolutional network: a ResNeXt-50 with pre-trained weights, its softmax classifier head dropped (top=False) and replaced with a sigmoid output (two-class logistic regression). Pretty simple stuff, but better than almost 75% of the competitors.
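For anyone curious what that setup looks like in code, here is a minimal sketch. The post doesn't say which framework was used, so this assumes PyTorch/torchvision: load a pre-trained ResNeXt-50 backbone, swap the 1000-way classifier head for a single logit, and train it as a real-vs-fake logistic regression. Batch size, learning rate, and the dummy data are purely illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model():
    # Pre-trained ResNeXt-50 (32x4d) backbone from torchvision.
    model = models.resnext50_32x4d(pretrained=True)
    # Replace the 1000-way ImageNet classifier with a single logit
    # (two-class logistic regression: sigmoid over one output).
    model.fc = nn.Linear(model.fc.in_features, 1)
    return model

model = build_model()
criterion = nn.BCEWithLogitsLoss()  # sigmoid + binary cross-entropy in one op
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative LR

# One illustrative training step on a dummy batch of face crops.
images = torch.randn(8, 3, 224, 224)          # 8 RGB 224x224 crops
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```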
